[petsc-users] petsc and meshless type method: Forming parallel coefficient matrix

Jed Brown jed at 59A2.org
Fri Oct 1 09:57:40 CDT 2010


On Fri, Oct 1, 2010 at 16:34, M. Scot Breitenfeld <brtnfld at uiuc.edu> wrote:
>  Hi, I'm working on implementing PETSc in a meshfree-type method. I have the
> serial version working with PETSc, but I have some questions about the
> parallel implementation. I guess the easiest way to explain the problem is
> with an example, so let's take the 1D problem,
>
> 1    2    3   4    5   6   7   8
> o    o   o   o  | o   o   o  o
>
> Nodal partitioning: proc 0: 1-4
>                              proc 1: 5-8

Minor point: PETSc matrices and vectors use zero-based indexing.

> Now, Proc 0 contributes values to 5 and 6, so they are included in my "local
> numbering" and the total number of local nodes is 6 on proc 0.
>      Proc 1 contributes values to 3 and 4, so they are included in my "local
> numbering" and the total number of local nodes is 6 on proc 1.
>
>  1    2    3    4    5    6
>  o    o    o    o    o    o                  Proc 0
>
>            o    o    o    o    o    o        Proc 1
>            3    4    5    6    7    8
>
> Each node has 3 dof, so the global coefficient matrix, A, would be 24x24:
> Processor 0 has rows (1-12) and Processor 1 has rows (13-24).

Presumably you provide this in MatSetSizes (or MatCreateMPI*AIJ)?
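
For the sizes in your example, a minimal sketch in C (against the current
PETSc interface; the Fortran calls take the same arguments plus the trailing
ierr) could look like the following.  The preallocation numbers are
placeholders; replace them with counts from your neighbor lists.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat A;

  PetscInitialize(&argc, &argv, NULL, NULL);

  MatCreate(PETSC_COMM_WORLD, &A);
  /* With two processes: 12 local rows/columns (4 owned nodes x 3 dof), 24x24 globally */
  MatSetSizes(A, 12, 12, 24, 24);
  MatSetFromOptions(A);
  /* Placeholder preallocation: at most 9 nonzeros per row in the diagonal
     block and 6 in the off-diagonal block; use your actual connectivity. */
  MatSeqAIJSetPreallocation(A, 9, NULL);
  MatMPIAIJSetPreallocation(A, 9, NULL, 6, NULL);

  /* ... MatSetValues / MatAssemblyBegin / MatAssemblyEnd go here ... */

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}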

> When each processor loops over its nodes (Proc 0: 1-6, Proc 1: 3-4) it adds
> its contribution into A:

Are these indices really what you mean?

> CALL MatSetValues(A, 1, "global row id", "number of column entries", "column
> entries", "values", ADD_VALUES, ierr)
>
> I'm unclear on the best way to create the A matrix.
>
> For each processor I have its list of global nodes (proc 0: 1-6, proc 1:
> 3-8), so I can get the corresponding rows in the global A that it contributes
> to (note that some of those rows are not local to the processor; those values
> need to be sent).

They will be sent during MatAssemblyBegin/End.  There are two common
assembly modes.  Either a process sets only rows that it owns (the
owner of a particle computes the interaction with all its neighbors;
note that there is some redundancy here if the interaction is
symmetric), or you partition the interactions (elements in FEM, fluxes
in FVM) and compute each interaction only once (no redundancy), summing
into the appropriate rows.  The latter involves adding values into
unowned rows.  PETSc matrices perform this communication
automatically.  You can choose which type to use (or some hybrid if
you desire).
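
As a concrete illustration of the second mode, here is a C fragment
(assuming the Mat A has already been created as above; the indices and
values are placeholders) that adds one interaction's contribution into a
row the calling process may not own:

/* One interaction's 1x3 contribution; global row 13 is owned by process 1,
   but process 0 can still add into it with ADD_VALUES.  PETSc stashes the
   entry and ships it to the owner during assembly. */
PetscInt    grow     = 13;
PetscInt    gcols[3] = {12, 13, 14};
PetscScalar vals[3]  = {-1.0, 2.0, -1.0};   /* placeholder values */

MatSetValues(A, 1, &grow, 3, gcols, vals, ADD_VALUES);

/* ... all other interactions ... */

MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

MatAssemblyBegin/End is collective and is called once after all
contributions have been added; don't mix ADD_VALUES and INSERT_VALUES
without a flush assembly in between.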

You can also choose whether to insert in the local ordering or the
global ordering.  They are equivalent; it's just a matter of what is
most convenient for you.

> Also, for each processor I have a list of the actual global nodes the
> processor owns (proc 0: 1-4, proc 1: 5-8).
>
> I can set the mapping with:
>
> CALL ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, "number of local
> elements", "global index of elements", mapping, ierr)
>
> where I'm assuming the "number of local elements" includes the ghost nodes.
> These are essentially the global rows this processor will contribute to.
>
> How do I tell PETSc what nodes the processor actually owns so it knows what
> nodes need to be sent? Is there another mapping command to use for mapping
> the owned global nodes to the processor, and is there then a command that
> takes both maps and figures out what nodes need to be sent?
> Or is the correct procedure to use VecScatterCreate and create these arrays
> myself?

Create local and global vectors; VecScatter will get the values from
wherever the IS says.  You don't have to do anything special to
indicate which indices are owned; that is determined from the Vec.
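
A sketch of that route, written against the current C interface and
assuming exactly the two-process layout from your example (run it on two
processes; each process gathers the 18 dof of its 6 locally numbered nodes):

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec         xglobal, xlocal;
  IS          from;
  VecScatter  scatter;
  PetscInt    ghosted[18], i;
  PetscMPIInt rank;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* Global vector: 12 owned dof per process, 24 total. */
  VecCreateMPI(PETSC_COMM_WORLD, 12, 24, &xglobal);
  /* Local work vector: 6 locally numbered nodes x 3 dof = 18 entries. */
  VecCreateSeq(PETSC_COMM_SELF, 18, &xlocal);

  /* Global dof indices of the owned + ghost nodes, in local order:
     process 0 touches nodes 1-6 (dof 0-17), process 1 nodes 3-8 (dof 6-23). */
  for (i = 0; i < 18; i++) ghosted[i] = (rank == 0 ? 0 : 6) + i;
  ISCreateGeneral(PETSC_COMM_SELF, 18, ghosted, PETSC_COPY_VALUES, &from);

  /* NULL for the "to" index set means the whole local vector, in order. */
  VecScatterCreate(xglobal, from, xlocal, NULL, &scatter);
  VecScatterBegin(scatter, xglobal, xlocal, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scatter, xglobal, xlocal, INSERT_VALUES, SCATTER_FORWARD);

  VecScatterDestroy(&scatter);
  ISDestroy(&from);
  VecDestroy(&xlocal);
  VecDestroy(&xglobal);
  PetscFinalize();
  return 0;
}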

> Can I still use MatSetValues, or do I have to use MatSetValuesLocal (together
> with VecSetLocalToGlobalMapping) to assemble the matrix, or can I still do
> this all using global numbering?

Yes, either way works; it's entirely up to you.
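
If you do go through the local numbering, a fragment (assuming the Mat A
and the two-process layout from the earlier sketches; note the
ISLocalToGlobalMappingCreate argument list shown here is the current one
and has changed between PETSc versions) might look like:

ISLocalToGlobalMapping l2g;
PetscInt               map[18], i, lrow, lcols[3];
PetscScalar            vals[3] = {-1.0, 2.0, -1.0};   /* placeholder values */
PetscMPIInt            rank;

MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
/* Global dof index of each of the 18 local dof (6 local nodes x 3 dof). */
for (i = 0; i < 18; i++) map[i] = (rank == 0 ? 0 : 6) + i;

ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, 18, map, PETSC_COPY_VALUES, &l2g);
MatSetLocalToGlobalMapping(A, l2g, l2g);   /* same map for rows and columns */
ISLocalToGlobalMappingDestroy(&l2g);       /* the matrix keeps a reference */

/* Indices are now local: on process 0, local dof 12 belongs to ghost node 5,
   so this contribution ends up in a row owned by process 1. */
lrow = 12; lcols[0] = 9; lcols[1] = 12; lcols[2] = 15;
MatSetValuesLocal(A, 1, &lrow, 3, lcols, vals, ADD_VALUES);

MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

VecSetLocalToGlobalMapping works the same way on the vector side, so the
same local indices can then be used with VecSetValuesLocal.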

> If there is an example similar to what I'm doing that you could point me to,
> it would be helpful.

I'm not familiar with a particle example in PETSc.  It might help to
look at an unstructured mesh problem; the linear algebra setup is
quite similar.

Jed

