[petsc-dev] use of hash table vs array in various places in PETSc
Barry Smith
bsmith at mcs.anl.gov
Tue Sep 20 12:14:36 CDT 2011
On Sep 20, 2011, at 11:37 AM, Mark F. Adams wrote:
> A type-independent scatter would make my code simpler, faster, and shorter. I reuse Mat scatters for all sorts of things and currently have to call the scatter on each variable (stuffing ints into floats).
This doesn't sound like it should be a PETSc operation. It should be an MPI 3 operation. Please don't make VecScatter more complicated than it already is; instead, design what you want at the "MPI" level (as if you were adding it to the MPI 3 standard) and see what you get. (In fact, look at what the MPI 3 folks are doing and see whether they are on the right track.)
This
>> PetscScatterCreate(PetscLayout from,IS isfrom,PetscLayout to,IS isto,PetscScatter*);
>>
>> PetscScatterBegin(PetscScatter,void *from,MPI_Datatype,void *to,MPI_Datatype,MPI_Op,ScatterMode);
is a strange combination of PETSc types and MPI types and PETSc styles and MPI styles. The right way to think about what you want is purely in an MPI style.
Barry
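A minimal sketch of what "design it at the MPI level" might look like, using the neighborhood collectives drafted for MPI 3; this is illustrative, not code from the thread, and the neighbor lists, counts, and displacements are assumed to come from the usual setup phase:

    #include <mpi.h>

    /* Once the communication graph is attached to the communicator,
     * any MPI_Datatype moves over the same channels. */
    void ScatterMPI3Style(MPI_Comm comm,
                          int nrecv, const int sources[], /* ranks we receive from */
                          int nsend, const int dests[],   /* ranks we send to */
                          const void *sendbuf, const int sendcounts[], const int sdispls[],
                          void *recvbuf, const int recvcounts[], const int rdispls[],
                          MPI_Datatype type)              /* MPI_INT, MPI_DOUBLE, ... */
    {
      MPI_Comm nbr;
      MPI_Dist_graph_create_adjacent(comm, nrecv, sources, MPI_UNWEIGHTED,
                                     nsend, dests, MPI_UNWEIGHTED,
                                     MPI_INFO_NULL, 0, &nbr);
      MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, type,
                             recvbuf, recvcounts, rdispls, type, nbr);
      MPI_Comm_free(&nbr);
    }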
>
> On Sep 20, 2011, at 12:22 PM, Matthew Knepley wrote:
>
>> On Tue, Sep 20, 2011 at 8:43 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> On Tue, Sep 20, 2011 at 01:05, Barry Smith <bsmith at mcs.anl.gov> wrote:
>> It may very well be limiting with MatGetSubMatrix() for VI solvers.
>>
>> As I recall, the gather is used to identify which global columns out of the off-diagonal block are needed. Well, the process owning that part of the index set knows, so why not use the scatter for the off-diagonal block to mark the needed columns?
>>
>> This seems quite straightforward when the submatrix is taking a local subset (not moving rows to different processes). This also takes care of the most common use case in fieldsplit.
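A rough sketch of that marking idea with today's API (illustrative only; ctx stands for the existing Mult scatter of the off-diagonal block): scatter an indicator vector forward, so the owners of the selected rows flag the ghost columns directly and no gather is needed.

    #include <petscvec.h>

    /* xsel:  parallel Vec with 1.0 in the selected rows, 0.0 elsewhere.
     * lmask: the local (ghost) Vec the off-diagonal scatter normally fills. */
    PetscErrorCode MarkNeededColumns(VecScatter ctx, Vec xsel, Vec lmask)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = VecSet(lmask, 0.0);CHKERRQ(ierr);
      ierr = VecScatterBegin(ctx, xsel, lmask, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecScatterEnd(ctx, xsel, lmask, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      /* Nonzero entries of lmask now mark the needed off-diagonal columns. */
      PetscFunctionReturn(0);
    }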
>>
>>
>> This would be even nicer if we could move indices over the same channels as scalars. This comes up so frequently that I think we should build a type-independent scatter, something like
>>
>> PetscScatterCreate(PetscLayout from,IS isfrom,PetscLayout to,IS isto,PetscScatter*);
>>
>> PetscScatterBegin(PetscScatter,void *from,MPI_Datatype,void *to,MPI_Datatype,MPI_Op,ScatterMode);
>>
>>
>> The VecScatter API could be preserved, but the implementation would become a thin wrapper over this thing.
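Hypothetical usage of the proposed interface; none of these PetscScatter calls exist in PETSc, PetscScatterEnd is assumed by analogy with VecScatterBegin/End, MPI_REPLACE is borrowed from the RMA ops to mean plain insert, and the layouts, index sets, and arrays are placeholder names:

    PetscScatter   sc;
    PetscErrorCode ierr;

    ierr = PetscScatterCreate(rowlayout, isfrom, sublayout, isto, &sc);CHKERRQ(ierr);
    /* Move scalars, summing at the destination ... */
    ierr = PetscScatterBegin(sc, xarray, MPIU_SCALAR, yarray, MPIU_SCALAR, MPI_SUM, SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = PetscScatterEnd(sc, xarray, MPIU_SCALAR, yarray, MPIU_SCALAR, MPI_SUM, SCATTER_FORWARD);CHKERRQ(ierr);
    /* ... then move column indices over the exact same channels,
     * instead of stuffing ints into floats. */
    ierr = PetscScatterBegin(sc, idxfrom, MPIU_INT, idxto, MPIU_INT, MPI_REPLACE, SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = PetscScatterEnd(sc, idxfrom, MPIU_INT, idxto, MPIU_INT, MPI_REPLACE, SCATTER_FORWARD);CHKERRQ(ierr);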
>>
>> Wait, a type-independent Scatter? The one I have been begging for for 7 years and finally coded myself, suboptimally (see ParallelMapping.hh). I would like to see VecScatter completely refactored. We need
>>
>> - Flexible types
>> - Flexible combination function
>> - Better linkage to DM
>>
>> Matt
>>
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
>