[petsc-users] petsc and meshless type method: Forming parallel coefficient matrix

M. Scot Breitenfeld brtnfld at uiuc.edu
Fri Oct 1 09:34:31 CDT 2010


  Hi, I'm working on using PETSc in a meshfree-type method. I have the
serial version working with PETSc, but I have some questions about the
parallel implementation. I guess the easiest way to explain the problem
is with an example, so let's take this 1D problem:

1    2    3    4   |   5    6    7    8
o    o    o    o   |   o    o    o    o

Nodal partitioning:   proc 0: nodes 1-4
                      proc 1: nodes 5-8

Now, proc 0 also contributes values to nodes 5 and 6, so they are
included in its "local numbering" and the total number of local nodes on
proc 0 is 6. Likewise, proc 1 contributes values to nodes 3 and 4, so
they are included in its "local numbering" and the total number of local
nodes on proc 1 is also 6:

Proc 0:   1    2    3    4    5    6
          o    o    o    o    o    o

Proc 1:             o    o    o    o    o    o
                    3    4    5    6    7    8

Each node has 3 dof, so the global coefficient matrix, A,  would be 
24x24: Processor 0 has rows (1-12) and Processor 1 has rows (13-24)

When each processor loops over its local nodes (proc 0: 1-6, proc 1:
3-8), it adds its contribution into A:

CALL MatSetValues(A, 1, "global row id", "number of column entries", 
"column entries", "values", ADD_VALUES,ierr)

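Concretely, one row at a time this looks something like the following
sketch (placeholder variable names, not my actual code; I assume 3 dof
per node with 0-based global dof index 3*(node-1) + component, and the
usual PETSc Fortran include files):

    PetscInt       grow(1), gcols(30), ncols   ! 30 = arbitrary bound for this sketch
    PetscScalar    gvals(30)
    PetscErrorCode ierr

    ! ... fill grow(1), ncols, gcols(1:ncols), gvals(1:ncols) for one row ...
    call MatSetValues(A, 1, grow, ncols, gcols, gvals, ADD_VALUES, ierr)

    ! after every process has added all of its rows:
    call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
    call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
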
I'm unclear what the best way to create the A matrix is.

For each processor I have the set of nodes it touches, in global
numbering (proc 0: 1-6, proc 1: 3-8), so I can get the corresponding rows
of the global A that it contributes to (note that some of those rows are
not local to the processor, so those values need to be sent).

Also, for each processor I have the list of global nodes it actually
owns (proc 0: 1-4, proc 1: 5-8).
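
So, as a sketch for this 8-node example, I imagine creating A with
something like the call below (MatCreateMPIAIJ is what I'm assuming for
the parallel AIJ matrix; 12 = 4 owned nodes x 3 dof, and the
preallocation numbers are just placeholders):

    Mat            A
    PetscInt       nlocal
    PetscErrorCode ierr

    nlocal = 12     ! rows owned by this process: 4 owned nodes * 3 dof
    ! 30 / 15 below are placeholder d_nz / o_nz preallocation guesses
    call MatCreateMPIAIJ(PETSC_COMM_WORLD, nlocal, nlocal,                 &
                         PETSC_DETERMINE, PETSC_DETERMINE,                 &
                         30, PETSC_NULL_INTEGER, 15, PETSC_NULL_INTEGER,   &
                         A, ierr)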

I can set the mapping with:

CALL ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, "number of local 
elements", "global index of elements", mapping, ierr)

where I'm assuming the "number of local elements" includes the ghost
nodes. This is essentially the list of global rows this processor will
contribute to.
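
As a sketch (where "localnodes" is a made-up array holding the global
node number of each of the 6 local nodes, owned plus ghost, expanded to
3 dof per node with 0-based indices; I realize the exact argument list
may differ between PETSc versions):

    ISLocalToGlobalMapping mapping
    PetscInt               nloc, indices(18), localnodes(6), node, k
    PetscErrorCode         ierr

    ! localnodes(1:6) = global node numbers of my 6 local nodes (filled elsewhere)
    nloc = 6*3                       ! 6 local nodes (owned + ghost), 3 dof each
    do node = 1, 6
       do k = 1, 3
          indices(3*(node-1)+k) = 3*(localnodes(node)-1) + (k-1)
       end do
    end do
    call ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, nloc, indices,     &
                                      mapping, ierr)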

How do I tell PETSc which nodes the processor actually owns, so it knows
which values need to be sent? Is there another mapping command for
mapping the owned global nodes to the processor, and then a command that
takes both maps and figures out what needs to be sent? Or is the correct
procedure to use VecScatterCreate and build these arrays myself?
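
If the VecScatter route is the way to go, I imagine it would look roughly
like this (sketch with made-up names: "xglobal" is an existing parallel
vector, and "nghostdof"/"ghostdofs" would be the count and the 0-based
global dof indices of my ghost nodes; argument lists may vary slightly
with PETSc version):

    Vec            xglobal, xghost
    IS             isfrom, isto
    VecScatter     scat
    PetscInt       nghostdof, ghostdofs(6)   ! 2 ghost nodes * 3 dof (placeholder)
    PetscErrorCode ierr

    ! local sequential vector to receive the ghost values
    call VecCreateSeq(PETSC_COMM_SELF, nghostdof, xghost, ierr)
    ! source indices in the global vector; destination is 0..nghostdof-1 locally
    call ISCreateGeneral(PETSC_COMM_SELF, nghostdof, ghostdofs, isfrom, ierr)
    call ISCreateStride(PETSC_COMM_SELF, nghostdof, 0, 1, isto, ierr)
    call VecScatterCreate(xglobal, isfrom, xghost, isto, scat, ierr)
    call VecScatterBegin(scat, xglobal, xghost, INSERT_VALUES, SCATTER_FORWARD, ierr)
    call VecScatterEnd(scat, xglobal, xghost, INSERT_VALUES, SCATTER_FORWARD, ierr)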

Can I still use MatSetValues, or do I have to use MatSetValuesLocal
(together with MatSetLocalToGlobalMapping) to assemble the matrix, or can
I still do all of this with global numbering?
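
That is, I assume the local-numbering variant would be something along
these lines (sketch; lrow/lcols/lvals are placeholders holding local
0-based dof indices and values, and I understand MatSetLocalToGlobalMapping's
arguments may depend on the PETSc version):

    ! attach the mapping to A once, right after creating it
    call MatSetLocalToGlobalMapping(A, mapping, ierr)

    ! then address entries with LOCAL dof indices and let PETSc translate
    call MatSetValuesLocal(A, 1, lrow, ncols, lcols, lvals, ADD_VALUES, ierr)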

If there is an example similar to what I'm doing that you could point me
to, it would be helpful.

Thanks for your time,
Scot




