Some questions about the parallel implementation of PETSc

Ben Tay zonexo at gmail.com
Fri Jan 12 20:46:06 CST 2007


Hi,

I have the sequential version of PETSc working, and I have some questions about
the parallel implementation.

My code solves the Navier-Stokes equations. Mainly it has to solve two linear
equations: momentum and Poisson. The Poisson solve takes the largest percentage
of the time, so it is going to be done in parallel. Most likely the momentum
equation will be as well. The other parts of the code may be parallelized later
using MPI.



The parallel part is coded as follows:

call MatCreate(PETSC_COMM_WORLD,A,ierr)
call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m*n,m*n,ierr)
call MatSetFromOptions(A,ierr)
call MatGetOwnershipRange(A,Istart,Iend,ierr)
call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,m*n,b,ierr)
call VecSetFromOptions(b,ierr)
call VecDuplicate(b,x,ierr)

... insert matrix values in parallel, making use of Istart and Iend
... assembly
... solve
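
Concretely, what I have in mind for those three steps is roughly the following.
This is only a simplified sketch, with a 5-point Poisson stencil as in the PETSc
tutorial example ex2f; II, JJ, v, one and ksp are placeholder names, boundary
conditions and the right-hand side are omitted, and the usual PETSc Fortran
include files and declarations are assumed:

      PetscInt    II,JJ,one,i,j
      PetscScalar v
      KSP         ksp

      one = 1

!     Each process inserts only the rows it owns. Note that Iend is
!     one past the last locally owned row.
      do II = Istart, Iend-1
         i = II/n
         j = II - i*n
         v = -1.0
         if (i.gt.0) then
            JJ = II - n
            call MatSetValues(A,one,II,one,JJ,v,INSERT_VALUES,ierr)
         endif
         if (i.lt.m-1) then
            JJ = II + n
            call MatSetValues(A,one,II,one,JJ,v,INSERT_VALUES,ierr)
         endif
         if (j.gt.0) then
            JJ = II - 1
            call MatSetValues(A,one,II,one,JJ,v,INSERT_VALUES,ierr)
         endif
         if (j.lt.n-1) then
            JJ = II + 1
            call MatSetValues(A,one,II,one,JJ,v,INSERT_VALUES,ierr)
         endif
         v = 4.0
         call MatSetValues(A,one,II,one,II,v,INSERT_VALUES,ierr)
      end do

!     Assembly is collective: every process must make these calls.
      call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
      call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
      call VecAssemblyBegin(b,ierr)
      call VecAssemblyEnd(b,ierr)

!     Solve with a KSP. KSPSetOperators is shown with the newer
!     signature; older PETSc versions take an extra MatStructure flag.
      call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
      call KSPSetOperators(ksp,A,A,ierr)
      call KSPSetFromOptions(ksp,ierr)
      call KSPSolve(ksp,b,x,ierr)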

Since I've used PETSC_DECIDE, PETSc will determine the local dimensions of A,
b and x automatically.

Is the local number of rows of A, b and x equal to Iend-Istart? (Since Iend is
one past the last locally owned row, I assume the count is Iend-Istart rather
than Iend-Istart+1.)
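
If it helps to state it in code, this is the check I could do (a minimal
sketch; nlocal is just a placeholder name):

      PetscInt nlocal

      call MatGetOwnershipRange(A,Istart,Iend,ierr)
      call VecGetLocalSize(b,nlocal,ierr)
!     Since A and b were created with PETSC_DECIDE on the same
!     communicator, I expect nlocal == Iend-Istart on every process.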

After the equation is solved, I need to obtain the answer x.

I've used:


!     ppv is declared as PetscScalar ppv(1) and i_vec as PetscOffset;
!     entry k of x is then addressed as ppv(i_vec+k).
call VecGetArray(x,ppv,i_vec,ierr)
do j=1,size_y
   do i=1,size_x
      k=(j-1)*size_x+i
      p(i,j)=ppv(i_vec+k)
   end do
end do
call VecRestoreArray(x,ppv,i_vec,ierr)

I'm trying to get the address of the first value of x and map the values onto
a rectangular grid to update p. Is this correct?

Or am I only able to get the local values of x? Do I need to use some MPI
routines if I need to update the p values on the whole global grid?
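
For instance, if only the local values are accessible, would something like
this be the right way to gather the whole solution onto every process before
filling p? (A sketch only; I am assuming VecScatterCreateToAll is the
appropriate routine here, and x_all and ctx are placeholder names.)

      Vec        x_all
      VecScatter ctx

!     Gather the distributed solution x into a sequential vector
!     x_all holding the full global vector on every process.
!     (VecScatterBegin/End argument order as in recent PETSc versions.)
      call VecScatterCreateToAll(x,ctx,x_all,ierr)
      call VecScatterBegin(ctx,x,x_all,INSERT_VALUES,SCATTER_FORWARD,ierr)
      call VecScatterEnd(ctx,x,x_all,INSERT_VALUES,SCATTER_FORWARD,ierr)

!     The p(i,j) loop above would then read from x_all instead of x.
      call VecGetArray(x_all,ppv,i_vec,ierr)
      do j=1,size_y
         do i=1,size_x
            k=(j-1)*size_x+i
            p(i,j)=ppv(i_vec+k)
         end do
      end do
      call VecRestoreArray(x_all,ppv,i_vec,ierr)

      call VecScatterDestroy(ctx,ierr)
      call VecDestroy(x_all,ierr)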

Thanks a lot!