[petsc-users] using PETSc commands in parallel for loop

Reza Madankan rm93 at buffalo.edu
Thu Jan 5 19:27:58 CST 2012


Hello,
I have a quick question about using PETSc inside a parallel for loop in C.
In more detail, I have a few lines of matrix algebra written with PETSc
inside a for loop that I would like to parallelize.
Here is the code that I have written:

MPI_Comm_size(MPI_COMM_WORLD,&Np);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
/* each rank takes a contiguous block of nw/Np samples
   (this assumes nw is divisible by Np) */
for (j=myid*(nw/Np); j<(myid+1)*(nw/Np); j++)
{
         /* Ypcq: (ns*tindex_f) x 1 column holding sample j of Ymat */
         MatCreate(PETSC_COMM_WORLD,&Ypcq);
         MatSetSizes(Ypcq,PETSC_DECIDE,PETSC_DECIDE,ns*tindex_f,1);
         MatSetFromOptions(Ypcq);
         for (k=0; k<ns*tindex_f; ++k)
         {
              tmp=*(Ymat + j*ns*tindex_f + k); /* row k of sample j (row-major) */
              MatSetValues(Ypcq,1,&k,1,&col,&tmp,INSERT_VALUES);
         }
         MatAssemblyBegin(Ypcq,MAT_FINAL_ASSEMBLY);
         MatAssemblyEnd(Ypcq,MAT_FINAL_ASSEMBLY);
         MatAXPY(Ypcq,-1,Mean,DIFFERENT_NONZERO_PATTERN); /* Ypcq := Ypcq - Mean */
         // Evaluation of Ypcq Transpose:
         MatTranspose(Ypcq,MAT_INITIAL_MATRIX,&YpcqT);
         MatAssemblyBegin(YpcqT,MAT_FINAL_ASSEMBLY);
         MatAssemblyEnd(YpcqT,MAT_FINAL_ASSEMBLY);

         /* Para: np x 1 column holding parameter sample j from rhsp */
         MatCreate(PETSC_COMM_WORLD,&Para);
         MatSetSizes(Para,PETSC_DECIDE,PETSC_DECIDE,np,1);
         MatSetFromOptions(Para);

         for (i=0; i<np; ++i)
         {
              MatSetValues(Para,1,&i,1,&col,&rhsp[np*j+i],INSERT_VALUES);
         }
         MatAssemblyBegin(Para,MAT_FINAL_ASSEMBLY);
         MatAssemblyEnd(Para,MAT_FINAL_ASSEMBLY);
         MatAXPY(Para,-1,XYmat,DIFFERENT_NONZERO_PATTERN); /* Para := Para - XYmat */
         MatAssemblyBegin(Para,MAT_FINAL_ASSEMBLY);
         MatAssemblyEnd(Para,MAT_FINAL_ASSEMBLY);
         /* outer product: InnProd = Para * YpcqT, size np x (ns*tindex_f) */
         MatMatMult(Para,YpcqT,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&InnProd);
         MatAssemblyBegin(InnProd,MAT_FINAL_ASSEMBLY);
         MatAssemblyEnd(InnProd,MAT_FINAL_ASSEMBLY);

         MatAssemblyBegin(Pzy,MAT_FINAL_ASSEMBLY);
         MatAssemblyEnd(Pzy,MAT_FINAL_ASSEMBLY);
         MatAXPY(Pzy,rhsw[j],InnProd,DIFFERENT_NONZERO_PATTERN); /* Pzy += rhsw[j]*InnProd */
         MatAssemblyBegin(Pzy,MAT_FINAL_ASSEMBLY);
         MatAssemblyEnd(Pzy,MAT_FINAL_ASSEMBLY);

         MatDestroy(&InnProd);
         MatDestroy(&Ypcq);
         MatDestroy(&YpcqT);
         MatDestroy(&Para);
}
/* intended: sum each rank's partial Pzy into a global Pzy on rank 0 */
MPI_Reduce(&Pzy,&Pzy,1,MPI_DOUBLE,MPI_SUM,0,MPI_COMM_WORLD);
MatView(Pzy,PETSC_VIEWER_STDOUT_WORLD);
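
In other words, what I am trying to compute (with the sum over j split
across the Np ranks) is, as far as I can tell from my own code:

    Pzy = sum_{j=0}^{nw-1} rhsw[j] * (Para_j - XYmat) * (Ypcq_j - Mean)^T

where Para_j is the np x 1 column built from rhsp and Ypcq_j is the
(ns*tindex_f) x 1 column built from Ymat.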

I am trying to parallelize this loop using MPI, but I don't get the right
result. I would appreciate it if anyone could help me with this.
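
For comparison, here is the reduction pattern I believe I am after,
written as a minimal self-contained sketch on plain double arrays
instead of Mat objects (NW, N, and contribution() are made-up
placeholders, not my real code):

#include <mpi.h>
#include <stdio.h>

#define NW 8   /* number of samples, assumed divisible by Np */
#define N  4   /* length of each per-sample contribution     */

/* stand-in for the real per-sample matrix algebra */
static void contribution(int j, double *out)
{
    int k;
    for (k = 0; k < N; ++k)
        out[k] = (double)(j + k);
}

int main(int argc, char **argv)
{
    int Np, myid, j, k;
    double local[N] = {0.0}, global[N], tmp[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &Np);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* each rank accumulates its own block of the j loop locally ... */
    for (j = myid*(NW/Np); j < (myid+1)*(NW/Np); ++j) {
        contribution(j, tmp);
        for (k = 0; k < N; ++k)
            local[k] += tmp[k];
    }

    /* ... and the per-rank partial sums are combined elementwise */
    MPI_Reduce(local, global, N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        for (k = 0; k < N; ++k)
            printf("global[%d] = %g\n", k, global[k]);

    MPI_Finalize();
    return 0;
}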

Thanks in advance
Reza