[petsc-users] newbie question on the parallel allocation of matrices

Treue, Frederik frtr at risoe.dtu.dk
Wed Nov 30 09:59:59 CST 2011


Hi everyone,

Caveat: I have just started using PETSc, so the answer to my question may well be fairly trivial.

I'm trying to run the following bits of code:

  /* 10x10 global grid, ghosted boundaries in both directions, box stencil,
     1 degree of freedom per node, stencil width 1 */
  DMDACreate2d(PETSC_COMM_WORLD,DMDA_BOUNDARY_GHOSTED,DMDA_BOUNDARY_GHOSTED,
               DMDA_STENCIL_BOX,10,10,PETSC_DECIDE,PETSC_DECIDE,1,1,
               PETSC_NULL,PETSC_NULL,&da);
[snip]
  PetscInt    i,j,row[1],col[2];   /* declarations assumed here; presumably part of the snipped code */
  PetscScalar val[2];

  MatCreate(PETSC_COMM_WORLD,&((*FD).ddx));
  MatSetSizes((*FD).ddx,PETSC_DECIDE,PETSC_DECIDE,100,100);
  MatSetFromOptions((*FD).ddx);

  /* build the difference operator block by block (10 blocks of 10 rows) */
  for (i=0;i<10;i++) {
    /* first row of the block */
    col[0]=i*10;col[1]=i*10+1; row[0]=i*10;
    val[0]=1;val[1]=1;
    MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);
    /* interior rows: -1 at column j-1, +1 at column j+1 */
    for (j=1;j<10-1;j++) {
      col[0]=i*10+j-1;col[1]=i*10+j+1; row[0]=i*10+j;
      val[0]=-1;val[1]=1;
      MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);
    }
    /* last row of the block */
    col[0]=i*10+10-2;col[1]=i*10+10-1; row[0]=i*10+10-1;
    val[0]=-1;val[1]=-1;
    MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);
  }
  MatAssemblyBegin((*FD).ddx,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd((*FD).ddx,MAT_FINAL_ASSEMBLY);

  MatScale((*FD).ddx,1.0/(2.0*(1.0/9.0)));
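  /* assumption: 1/9 is the grid spacing h, so the 1/(2h) scaling above turns the
     -1/+1 stencil on the interior rows into the centered difference (f[j+1]-f[j-1])/(2h) */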
[snip]
  DMCreateGlobalVector(da,&tmpvec2);
  VecSet(tmpvec2,1.0);
  VecAssemblyBegin(tmpvec2);
  VecAssemblyEnd(tmpvec2);
  DMCreateGlobalVector(da,&tmpvec3);
  VecSet(tmpvec3,1.0);
  VecAssemblyBegin(tmpvec3);
  VecAssemblyEnd(tmpvec3);
  MatView((*FD).ddx,PETSC_VIEWER_STDOUT_WORLD);
  VecView(tmpvec2,PETSC_VIEWER_STDOUT_WORLD);
  MatMult((*FD).ddx,tmpvec2,tmpvec3);
  VecView(tmpvec3,PETSC_VIEWER_STDOUT_WORLD);
  int      tid;
  PetscInt first,last;
  MPI_Comm_rank(PETSC_COMM_WORLD,&tid);
  sleep(1);
  MatGetOwnershipRange((*FD).ddx,&first,&last);
  printf("rank: %d, first: %d, last: %d\n",tid,(int)first,(int)last);

When running on a single processor, everything works as expected (see the attached file seqRes). However, when running with 4 processors (mpirun -np 4 ./progname) I get the output in the attached mpiRes. Note that there really is a difference in the result - it's not just a surprising division of points between the processes, which I checked with PETSC_VIEWER_DRAW_WORLD. How can that be? I also notice that although each process reports at the end that it owns 25 rows, MatView prints

Matrix Object: 1 MPI processes
  type: mpiaij

Is this OK? And if not, what am I doing wrong, presumably in the matrix allocation code?
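
(Also for reference, a minimal sketch of letting the DMDA create the matrix itself, so that its parallel row layout is guaranteed to match the DMDA's global vectors. This assumes the PETSc 3.2 interface, where the call is DMGetMatrix, and uses MatSetValuesStencil on one interior grid point purely as an illustration:)

  Mat         A;
  MatStencil  r,c[2];
  PetscScalar v[2];
  PetscInt    i = 5, j = 5;     /* some interior grid point, for illustration only */

  DMGetMatrix(da,MATAIJ,&A);    /* matrix with the DMDA's parallel layout */
  r.i = i; r.j = j;             /* rows/columns addressed by grid indices */
  c[0].i = i-1; c[0].j = j; v[0] = -1.0;
  c[1].i = i+1; c[1].j = j; v[1] =  1.0;
  MatSetValuesStencil(A,1,&r,2,c,v,INSERT_VALUES);
  MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);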


---
Yours sincerely,
Frederik Treue

Attachments:
  seqRes (application/octet-stream, 3325 bytes):
  <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111130/3ebb1b4a/attachment.obj>
  mpiRes (application/octet-stream, 3478 bytes):
  <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111130/3ebb1b4a/attachment-0001.obj>

