[petsc-users] newbie question on the parallel allocation of matrices

Treue, Frederik frtr at risoe.dtu.dk
Fri Dec 2 03:32:17 CST 2011



From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley
Sent: Wednesday, November 30, 2011 5:05 PM
To: PETSc users list
Subject: Re: [petsc-users] newbie question on the parallel allocation of matrices

On Wed, Nov 30, 2011 at 9:59 AM, Treue, Frederik <frtr at risoe.dtu.dk> wrote:
Hi everyone,

Caveat: I have just started using petsc, so the answer to my question may very well be fairly trivial.

See SNES ex5 <http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex5.c.html> for the right way to interact with the DMDA. We will preallocate the matrix for you and allow
you to set values using a stencil.
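For concreteness, a minimal untested sketch of that pattern (not ex5 itself), written with the petsc-3.2-era names your code already uses - DMGetMatrix is called DMCreateMatrix in later releases - and with an unscaled centered x-difference stencil purely as an example:

  #include <petscdmda.h>

  int main(int argc, char **argv)
  {
    DM             da;
    Mat            J;
    PetscInt       i, j, xs, ys, xm, ym, nc;
    const PetscInt Mx = 10;                 /* global grid size in x, matches DMDACreate2d below */
    MatStencil     row, col[2];
    PetscScalar    v[2];

    PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
    DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_GHOSTED,
                 DMDA_STENCIL_BOX, 10, 10, PETSC_DECIDE, PETSC_DECIDE,
                 1, 1, PETSC_NULL, PETSC_NULL, &da);

    /* preallocated matrix whose parallel layout matches the DMDA's global vectors */
    DMGetMatrix(da, MATAIJ, &J);

    /* loop only over the grid points this process owns */
    DMDAGetCorners(da, &xs, &ys, PETSC_NULL, &xm, &ym, PETSC_NULL);
    for (j = ys; j < ys + ym; j++) {
      for (i = xs; i < xs + xm; i++) {
        row.i = i; row.j = j;
        nc = 0;
        if (i > 0)      { col[nc].i = i - 1; col[nc].j = j; v[nc] = -1.0; nc++; }
        if (i < Mx - 1) { col[nc].i = i + 1; col[nc].j = j; v[nc] =  1.0; nc++; }
        MatSetValuesStencil(J, 1, &row, nc, col, v, INSERT_VALUES);
      }
    }
    MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);

    MatDestroy(&J);
    DMDestroy(&da);
    PetscFinalize();
    return 0;
  }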

  Matt

OK, but that example seems to assume that you want to attach only one matrix (the Jacobian) to a DA. I wish to define several, and I think I found this done in KSP ex39. Is that example doing anything deprecated, or will it work for me, e.g. with the various basic Mat routines (MatMult, MatAXPY, etc.) in a multiprocessor setup?
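(For concreteness, a minimal untested sketch of several matrices hanging off one DMDA, using the petsc-3.2-era DMGetMatrix - later releases call it DMCreateMatrix - with ddx/ddy as purely illustrative names:)

  Mat ddx, ddy;

  /* each call is expected to return a separate, preallocated matrix whose
     row/column layout matches the DMDA's global vectors                   */
  DMGetMatrix(da, MATAIJ, &ddx);
  DMGetMatrix(da, MATAIJ, &ddy);

  /* fill each one with MatSetValuesStencil and assemble; MatMult, MatAXPY, etc.
     then operate on vectors obtained from DMCreateGlobalVector(da, ...)        */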



I'm trying to run the following bits of code:

DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_GHOSTED, DMDA_STENCIL_BOX,10,10,PETSC_DECIDE,PETSC_DECIDE,1,1,PETSC_NULL,PETSC_NULL,&da);
[snip]
  MatCreate(PETSC_COMM_WORLD,&((*FD).ddx));
  MatSetSizes((*FD).ddx,PETSC_DECIDE,PETSC_DECIDE,100,100);
  MatSetFromOptions((*FD).ddx);

  for (i=0;i<10;i++) {
    col[0]=i*10;col[1]=i*10+1; row[0]=i*10;
    val[0]=1;val[1]=1;
    MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);
    for (j=1;j<10-1;j++) {
      col[0]=i*10+j-1;col[1]=i*10+j+1; row[0]=i*10+j;
      val[0]=-1;val[1]=1;
      MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);
    }
    col[0]=i*10+10-2;col[1]=i*10+10-1; row[0]=i*10+10-1;
    val[0]=-1;val[1]=-1;
    MatSetValues((*FD).ddx,1,row,2,col,val,INSERT_VALUES);
  }
  MatAssemblyBegin((*FD).ddx,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd((*FD).ddx,MAT_FINAL_ASSEMBLY);

  MatScale((*FD).ddx,1.0/(2.0*(1.0/9.0))); /* floating-point literals: 1/(2*(1/9)) would be evaluated in integer arithmetic */
[snip]
  DMCreateGlobalVector(da,&tmpvec2);
  VecSet(tmpvec2,1.0);
  VecAssemblyBegin(tmpvec2);
  VecAssemblyEnd(tmpvec2);
  DMCreateGlobalVector(da,&tmpvec3);
  VecSet(tmpvec3,1.0);
  VecAssemblyBegin(tmpvec3);
  VecAssemblyEnd(tmpvec3);
  MatView((*FD).ddx,PETSC_VIEWER_STDOUT_WORLD);
  VecView(tmpvec2,PETSC_VIEWER_STDOUT_WORLD);
  MatMult((*FD).ddx,tmpvec2,tmpvec3);
  VecView(tmpvec3,PETSC_VIEWER_STDOUT_WORLD);
  int tid,first,last;
  MPI_Comm_rank(PETSC_COMM_WORLD, &tid);
  sleep(1);
  MatGetOwnershipRange((*FD).ddx,&first,&last);
  printf("rank: %d,first: %d,last: %d\n",tid,first,last);

When running it on a single processor, everything works as expected (see the attached file seqRes).
However, when running with 4 processors (mpirun -np 4 ./progname) I get the output in mpiRes. Notice that there really is a difference; it is not just a surprising division of points between the processes - I checked this with PETSC_VIEWER_DRAW_WORLD. How come? I also notice that although each process reports at the end that it owns 25 rows, the header printed by MatView is

Matrix Object: 1 MPI processes
  type: mpiaij

Is this OK? And if not, what am I doing wrong, presumably in the matrix allocation code?
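(One untested way to compare the two layouts directly, using the variables from the listing above; note that PetscSynchronizedFlush gained an extra FILE * argument, e.g. PETSC_STDOUT, in later PETSc releases:)

  PetscInt mFirst, mLast, vFirst, vLast;

  MatGetOwnershipRange((*FD).ddx, &mFirst, &mLast);  /* local rows of the hand-built matrix  */
  VecGetOwnershipRange(tmpvec2, &vFirst, &vLast);    /* local rows of the DMDA global vector */
  PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%d] mat rows %d..%d, vec rows %d..%d\n",
                          tid, (int)mFirst, (int)mLast, (int)vFirst, (int)vLast);
  PetscSynchronizedFlush(PETSC_COMM_WORLD);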


---
yours sincerely
Frederik Treue




--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener