itaps-parallel [Fwd: Re: Parallel Interface philosophical comments]
Tim Tautges
tautges at mcs.anl.gov
Wed Oct 24 14:36:33 CDT 2007
[Forwarding for archival]
-------- Original Message --------
Subject: Re: Parallel Interface philosophical comments
Date: Tue, 23 Oct 2007 16:20:27 -0400
From: Mark Shephard <shephard at scorec.rpi.edu>
To: Tim Tautges <tautges at mcs.anl.gov>
CC: Onkar Sahni <osahni at scorec.rpi.edu>, "Devine, Karen D."
<kddevin at sandia.gov>, Kenneth Jansen <kjansen at scorec.rpi.edu>, Ting
Xie <txie at scorec.rpi.edu>, Vitus Leung <vjleung at sandia.gov>, Carl
Ollivier-Gooch <cfog at mech.ubc.ca>, "Lori A. Diachin"
<diachin2 at llnl.gov>, Jason Kraftcheck <kraftche at cae.wisc.edu>,
"Knupp, Patrick" <pknupp at sandia.gov>
References: <C338DFBB.B5CD%kddevin at sandia.gov>
<471CA0EE.8020904 at mcs.anl.gov>
<56567.128.113.131.145.1193166547.squirrel at www.scorec.rpi.edu>
<471E4FC7.80003 at mcs.anl.gov>
Tim,
I was under the impression at the bootcamp that we agreed that those
who have operational parallel capabilities support them through either
reverse classification (via mesh sets) or classification, and that we
would support both approaches through iPart. However, your last couple of
emails appear to go back to the old argument that we must do the minimal
thing, and that since mesh sets (reverse classification) are already in
there, that is the only way.
By the way, if I understand correctly, our existing data model has the
ability to support both reverse classification (mesh sets) and
classification (through iRel), so the parallel model should as well. Yes,
that means more functions than one approach or the other alone, but both
approaches exist. (In addition, I still do not equate functionality and
data model - if someone meets the functionality I really do not give a
crap about their data model.)
There are a number of groups that use classification as the primary mode
of interaction, doing things similar to how the RPI group does. Just as
we would not want you to have to redo all your parallel work to do
things strictly from a classification perspective, we do not expect that
the ITAPS parallel interface will require RPI to redo all its parallel
adaptive work (years' worth of work dating back to the mid-'90s, when we
were the first to do it) that is being used for SciDAC applications,
just to fit one specific way of doing things.
By the way, some feel that classification is a simple and powerful way to
do things, and that mesh sets are actually substantial extra overhead on
top of a well-defined model of using classification. In my experience, I
would love to have both classification and reverse classification
available at all times, since, depending on the question being asked, one
or the other is the faster way to answer it. However, due to the extra
data and the headache of keeping things correctly set, most people I know
tend to focus on one and use the other in a temporary manner when it is
advantageous.
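
To make the distinction concrete, here is a self-contained toy sketch in C
(not ITAPS API; every name in it is made up for illustration) of the two
query directions:

  /* Toy sketch only: hypothetical structs, not ITAPS data structures. */
  #include <stdio.h>

  #define MAX_MESH_FACES 8

  struct ModelFace;                      /* geometric model entity */

  typedef struct MeshFace {
    int id;
    struct ModelFace *classified_on;     /* classification: mesh -> model */
  } MeshFace;

  typedef struct ModelFace {
    int id;
    /* reverse classification: model -> mesh, kept as a "set" of mesh faces */
    MeshFace *mesh_faces[MAX_MESH_FACES];
    int num_mesh_faces;
  } ModelFace;

  int main(void) {
    ModelFace gf = { 1, { 0 }, 0 };
    MeshFace  mf = { 10, &gf };
    gf.mesh_faces[gf.num_mesh_faces++] = &mf;

    /* classification query: which model face is this mesh face on? */
    printf("mesh face %d lies on model face %d\n", mf.id, mf.classified_on->id);

    /* reverse classification query: which mesh faces lie on this model face? */
    for (int i = 0; i < gf.num_mesh_faces; i++)
      printf("model face %d contains mesh face %d\n", gf.id, gf.mesh_faces[i]->id);
    return 0;
  }

Again, the point is only that the data to answer either question can be
present; which one an implementation keeps as its primary structure is its
own business.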
Mark
Tim Tautges wrote:
>
>
> Onkar Sahni wrote:
>> Hi all,
>>
>> Here are some of my questions and comments on ITAPS parallel interface:
>>
>> 1) Is there a difference between "instance" and "handle" in ITAPS
>> terminology (or is there a definition for these)? (I may have missed it
>> if this was resolved/defined in previous ITAPS interfaces like iMesh.)
>>
>
> We use the generic term "handle" as an identifier for entities, entity
> sets, the interface itself, and tags. In our C headers, these are
> typedef'd to void*. There are also iterators, which we haven't really
> called handles, but could be referred to that way.
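>
> For example, the handle typedefs look roughly like this (a sketch of the
> convention in the C headers, not the literal declarations):
>
>   typedef void *iMesh_Instance;         /* the interface instance itself   */
>   typedef void *iBase_EntityHandle;     /* a mesh entity (vtx/edge/face/rgn) */
>   typedef void *iBase_EntitySetHandle;  /* an entity set (e.g. a part)     */
>   typedef void *iBase_TagHandle;        /* a tag                           */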
>
>> 2) For situations where we want to support multiple parts per
>> processor/process:
>> (a) How do we want to distinguish between processor/process-level
>> functionality and part-level functionality (keeping user- and
>> implementation-convenience and efficiency in mind)? Queries that fall at
>> the processor level include, e.g., the list of parts on the processor,
>> the MPI rank, etc., whereas queries at the part level include, e.g.,
>> neighboring parts (not processors). Further, the user may want to
>> get/iterate over mesh entities (and get adjacencies) at the processor
>> and/or part level.
>
> Processor-level queries are on the iMesh instance; part-level queries
> are on the entity set representing the part. Our iterator functions
> operate on both (with processor-level queries simply using the root set).
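>
> As a sketch (assuming the iMesh C binding roughly as specified; exact
> argument names/order may differ), given an instance and a part set handle
> part_set, the same call serves both levels:
>
>   iBase_EntitySetHandle root_set;
>   iBase_EntityHandle *proc_ents = NULL, *part_ents = NULL;
>   int proc_alloc = 0, proc_size, part_alloc = 0, part_size, err;
>
>   /* processor-level query: use the root set of the instance */
>   iMesh_getRootSet(instance, &root_set, &err);
>   iMesh_getEntities(instance, root_set, iBase_REGION, iMesh_ALL_TOPOLOGIES,
>                     &proc_ents, &proc_alloc, &proc_size, &err);
>
>   /* part-level query: the same call, against the set representing the part */
>   iMesh_getEntities(instance, part_set, iBase_REGION, iMesh_ALL_TOPOLOGIES,
>                     &part_ents, &part_alloc, &part_size, &err);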
>
>>
>> (b) Assuming a part handle is supported, how can the user/application
>> get access to it (say, on the processor where the part resides)?
>> Basically, will there be a way to directly access the part handle (like
>> a part iterator at the iProcParts level), or does the user have to
>> access it through tags/tag names, in which case the user has to figure
>> out the tag/tag name for a particular part (and who defines these
>> tags/tag names)?
>
> I advocate having both: a conventional tag given to parts, so that you
> can find sets that are parts by looking for sets with that tag; and also
> a convenience function (as part of the parallel extensions to iMesh)
> which returns all the parts in the given instance (i.e. on the
> processor). Note that an important capability for this function is being
> able to pass in an optional tag name/handle indicating the partition
> you're asking about. That allows multiple partitions to co-exist, which
> is important for some applications.
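>
> In rough pseudo-C, given the iMesh instance, usage might look like the
> following (the tag name and the convenience function are placeholders,
> not agreed-upon API):
>
>   iBase_EntitySetHandle *parts = NULL;
>   int parts_alloc = 0, parts_size, err;
>   iBase_TagHandle partition_tag;
>
>   /* conventional tag marking sets that are parts of a given partition
>      ("PARTITION" is only a placeholder name) */
>   iMesh_getTagHandle(instance, "PARTITION", &partition_tag, &err,
>                      strlen("PARTITION"));
>
>   /* hypothetical convenience function in the parallel extension: return
>      all part sets on this processor for the partition named by the tag */
>   iProcParts_getPartsOnProc(instance, partition_tag,
>                             &parts, &parts_alloc, &parts_size, &err);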
>
>>
>> 3) How general do we want to be in the interface with respect to
>> different data models/implementations (since in the future we may want
>> to incorporate/fit other implementations), especially regarding serial
>> vs. parallel data? Two possibilities for data models I can think of are:
>> (a) the one (serial) data model is enhanced to store parallel
>> information, and (b) the serial one is supplemented by a separate
>> parallel one to support parallel functionality (in possibility (b), all
>> parallel-specific information is stored in data separate from the
>> serial-specific data). We could design an interface that would fit both
>> possibilities; this is the reason we included iPartMesh (section 3.4)
>> level functions which are specific to parallel functionality in the mesh
>> (passing a handle to the serial data in these interfaces will not work
>> for the second possibility, (b)). I do not know how relevant this is.
>
> I think it's very important to put the parallel data into our existing
> data models, or extend the data model if that can't be done (which I
> doubt, since I've already embedded this information in our current data
> model). This is important because it allows us to use current tools to
> examine parallel data. Note also that one may sometimes be examining
> parallel data in a serial environment (e.g. running a partitioner in
> serial), so we shouldn't restrict accessing parallel data to a parallel
> environment.
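>
> As an illustration of that last point, a purely serial tool can walk a
> partition stored as tagged sets using only existing iMesh calls (sketch
> only; the tag name is a placeholder and the argument conventions are
> from memory):
>
>   iBase_EntitySetHandle root_set, *sets = NULL;
>   int sets_alloc = 0, sets_size, part_id, err;
>   iBase_TagHandle part_tag;
>
>   iMesh_getRootSet(instance, &root_set, &err);
>   iMesh_getTagHandle(instance, "PARTITION", &part_tag, &err,
>                      strlen("PARTITION"));
>   iMesh_getEntSets(instance, root_set, -1, &sets, &sets_alloc, &sets_size,
>                    &err);
>   for (int i = 0; i < sets_size; i++) {
>     iMesh_getEntSetIntData(instance, sets[i], part_tag, &part_id, &err);
>     if (err == iBase_SUCCESS) {
>       /* sets[i] is a part with id part_id; a serial partitioner or
>          visualizer can work with it directly, no MPI involved */
>     }
>   }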
>
> - tim
>
>>
>> Let me know if you have questions.
>>
>> Vitus, due to the above questions I did not respond to your comments. I
>> do not know whether the above questions and comments make things clear;
>> if you still have anything specific, please let us know.
>>
>> Karen, is there a way to create an email list for this subcommittee, in
>> case we want to add (and remove) people (and so that it keeps a record
>> of all the messages)? I have added Ting in this response as she is also
>> working on this stuff from the RPI side.
>>
>> Thanks,
>> Onkar
>>
>>> Hi all,
>>> I'd like to reiterate what I think should be the general behavior of
>>> functions in the parallel interface. IMO, we should take advantage of
>>> the simple yet powerful data model we already have for iMesh. That
>>> means using mesh sets for things which are obviously sets, and using the
>>> functions which already exist for accessing data in those sets. I think
>>> it does make sense in certain cases to make convenience functions in the
>>> parallel interface for accessing, e.g., all the sets which represent
>>> parts of a partition, on the current processor or globally, but I think
>>> we should minimize these.
>>> Based on the general behavior described above, I think many of the
>>> functions proposed for global ids and parts/partitions already exist in
>>> iMesh. For example, iPart_getGlobalID, iPart_createGlobalID,
>>> iProcParts_getPartIdArrOnProc, iProcParts iterator functions, etc.
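>>>
>>> For instance, something like iPart_getGlobalID could reduce to an
>>> ordinary tag access on the part set (sketch only; the tag name is a
>>> placeholder for whatever convention we settle on):
>>>
>>>   iBase_TagHandle gid_tag;
>>>   int global_id, err;
>>>
>>>   iMesh_getTagHandle(instance, "GLOBAL_ID", &gid_tag, &err,
>>>                      strlen("GLOBAL_ID"));
>>>   iMesh_getEntSetIntData(instance, part_set, gid_tag, &global_id, &err);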
>>>
>>> - tim
>>>
>>> --
>>> ================================================================
>>> "You will keep in perfect peace him whose mind is
>>> steadfast, because he trusts in you." Isaiah 26:3
>>>
>>> Tim Tautges Argonne National Laboratory
>>> (tautges at mcs.anl.gov) (telecommuting from UW-Madison)
>>> phone: (608) 263-8485 1500 Engineering Dr.
>>> fax: (608) 263-4499 Madison, WI 53706
>>>
>>>
>>
>>
>>
>
--
================================================================
"You will keep in perfect peace him whose mind is
steadfast, because he trusts in you." Isaiah 26:3
Tim Tautges Argonne National Laboratory
(tautges at mcs.anl.gov) (telecommuting from UW-Madison)
phone: (608) 263-8485 1500 Engineering Dr.
fax: (608) 263-4499 Madison, WI 53706