[petsc-dev] use of hash table vs array in various places in PETSc

Barry Smith bsmith at mcs.anl.gov
Tue Sep 20 18:27:49 CDT 2011


  In MPI land you indicate layout and index information via integer arrays:

  MPI_GraphComm_init(... ownership on each process, indices, ..., &ctx);

  MPI_GraphComm_start(ctx, pointers to the data, MPI_Datatypes, MPI_reduces, ...);


   Look at the MPI 3 proposals. 
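   For concreteness, a minimal sketch of roughly how those proposals express the same two stages, using the MPI-2.2 distributed-graph topology plus the proposed MPI_Neighbor_alltoallv; the wrapper name and the double-valued buffers here are only illustrative, not part of any proposal:

  #include <mpi.h>

  /* Sketch only: the graph of neighbors is given once as integer arrays
     (the _init stage above), and the buffers and datatypes are named at
     each exchange (the _start stage above). */
  void sketch_neighbor_exchange(MPI_Comm comm,
                                int indegree,  const int sources[],
                                int outdegree, const int destinations[],
                                const double *sendbuf, const int sendcounts[], const int sdispls[],
                                double       *recvbuf, const int recvcounts[], const int rdispls[])
  {
    MPI_Comm gcomm;

    /* communication graph fixed once, from integer arrays */
    MPI_Dist_graph_create_adjacent(comm,
                                   indegree,  sources,      MPI_UNWEIGHTED,
                                   outdegree, destinations, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0 /* no reorder */, &gcomm);

    /* each exchange names its own buffers and datatypes */
    MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                           recvbuf, recvcounts, rdispls, MPI_DOUBLE, gcomm);

    MPI_Comm_free(&gcomm);
  }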


On Sep 20, 2011, at 6:15 PM, Dmitry Karpeev wrote:

> 
> 
> On Tue, Sep 20, 2011 at 5:52 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> 
> On Sep 20, 2011, at 5:44 PM, Dmitry Karpeev wrote:
> 
> >
> >
> > On Tue, Sep 20, 2011 at 5:41 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > On Sep 20, 2011, at 2:06 PM, Jed Brown wrote:
> >
> > > On Tue, Sep 20, 2011 at 19:14, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > >  is a strange combination of PETSc types and MPI types and PETSc styles and MPI styles. The right way to think about what you want is purely in an MPI style.
> > >
> > > I agree that it would naturally belong at the MPI level. I wrote it this way because IS is a useful thing and MPI doesn't have any such object.
> >
> >   MPI has its own kinds of constructs for doing things; you could argue they suck ...
> >
> > > In particular, an aggregate MPI_Datatype encodes the type of each entry, but it's useful to be able to push ints through the same pipes as floats, so we'd end up producing an IS-like thingy in this MPI-add-on layer (or have a rather complicated interface with different creation functions for various combinations). I want the same scatter to be able to move different types (to amortize setup and guarantee consistency), so it's not good enough to specify all the types at the time it's created.
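The design point here, supplying the datatype per use rather than at creation, could look roughly like the following; this is a hypothetical interface sketch, not an existing PETSc or MPI API, and every name in it is made up for illustration:

  #include <mpi.h>

  /* Hypothetical sketch: the pattern is built once from index information,
     and each Begin/End pair names the MPI_Datatype, so the same pattern can
     move ints on one call and doubles on the next. */
  typedef struct HypoScatter_s *HypoScatter;

  int HypoScatterCreate(MPI_Comm comm, int n,
                        const int localidx[], const int remoteidx[],
                        HypoScatter *newsc);
  int HypoScatterBegin(HypoScatter sc, MPI_Datatype unit,
                       const void *sendbuf, void *recvbuf);
  int HypoScatterEnd(HypoScatter sc, MPI_Datatype unit,
                     const void *sendbuf, void *recvbuf);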
> >
> >  I wanted something that would bark like crazy and attack burglars at my house but I also wanted it to cuddle up on my lap and purr to me so I made a doat.
> >
> > The problem I see with a general scatter is that it has to operate on "laid-out" objects (so that, among other things, the scatter knows where to stick the things that arrive).  Currently, only Vec and Mat have layouts.
> >
> 
>   Custom MPI_Datatypes allow you to put stuff wherever you want? Right?
> 
> Yes, but how is the scatter specified, then?  Currently the indices in the ISs used to construct the scatter
> refer, essentially, to elements of some distributed array, and PetscLayout tells the scatter how that array is laid out
> across the comm: (1) which proc owns the element, and (2) where in the locally-owned portion of the array to find
> the element to send and where to stick arriving elements.  I imagine anything is possible with a custom datatype,
> but then it probably doesn't even resemble VecScatter.
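To make those two layout queries concrete, here is a minimal sketch, assuming a PetscLayout-style ranges[] array with size+1 entries where ranges[p] is the first global index owned by process p; the function name is illustrative, not an actual PETSc routine:

  /* Locate a global index in a block-row layout described by ranges[]. */
  static void layout_locate(const int ranges[], int size, int gidx,
                            int *owner, int *local_offset)
  {
    int lo = 0, hi = size;
    /* binary search for the process whose range contains gidx */
    while (hi - lo > 1) {
      int mid = (lo + hi) / 2;
      if (gidx < ranges[mid]) hi = mid; else lo = mid;
    }
    *owner        = lo;                /* (1) which proc owns the element      */
    *local_offset = gidx - ranges[lo]; /* (2) its place in that proc's portion */
  }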
> 
> 
>   Barry
> 
> > Dmitry.
> >
> 
> 



