[petsc-users] scattering strategies
Wienand Drenth
w.drenth at gmail.com
Mon Oct 17 15:17:21 CDT 2011
Dear all,
I have a question regarding the use of the VecScatter routines to collect and
distribute data from all processors to one, and vice versa, in relation to
file I/O. The setting is roughly as follows:
At a certain stage in my computation, I have computed some results in
parallel. Let these results be in an array X (X is a native C or Fortran
array, not a PETSc vector; X might be multidimensional as well). The Xs of
all processors together constitute my global result, and I would like to
write it to disk. However, X itself is of course only part of the total, so
I need to gather the pieces of X from all processors into one single
structure.
Furthermore, the Xs are in a PETSc ordering (1 ... n for processor 1, n+1
... n2 for processor 2, etc.), which does not reflect the ordering defined by
the user. So before writing I need to permute the values of X accordingly.
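For the reordering I suppose PETSc's AO (application ordering) routines could
express the permutation. A rough sketch of what I have in mind, with n, rstart
and app_idx as placeholder names for my local number of values, my first
global index in the PETSc ordering, and the user-ordering index of each local
value:

  AO       ao;
  PetscInt i, *idx;

  /* app_idx[i] gives the user-defined index of the value that sits at
     PETSc index rstart+i; NULL means the PETSc side is the natural,
     contiguous numbering */
  AOCreateBasic(PETSC_COMM_WORLD, n, app_idx, NULL, &ao);

  /* convert my own PETSc indices to user indices, in place */
  PetscMalloc(n * sizeof(PetscInt), &idx);
  for (i = 0; i < n; i++) idx[i] = rstart + i;
  AOPetscToApplication(ao, n, idx);
  /* idx[i] now tells where value i of X belongs in the user ordering */
  PetscFree(idx);
  AODestroy(&ao);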
My first thought is to use the VecScatter routines: define a parallel PETSc
vector XVec and copy the values of X into it (with VecGetArray and
VecRestoreArray, for example). With VecScatterCreateToZero I then obtain a
scatter context together with a sequential vector XSeq, and I am able to get
the distributed data into XSeq. The data of XSeq is then written to disk by
the zeroth processor (again using, for example, VecGetArray and
VecRestoreArray to access the data).
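In code, what I have now looks roughly like this (error checking omitted for
brevity; x is assumed to be my flat local array of length n, and all names
are placeholders):

  Vec         XVec, XSeq;
  VecScatter  ctx;
  PetscScalar *a;
  PetscMPIInt rank;
  PetscInt    i;

  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* parallel work vector holding a copy of my local data */
  VecCreateMPI(PETSC_COMM_WORLD, n, PETSC_DETERMINE, &XVec);
  VecGetArray(XVec, &a);
  for (i = 0; i < n; i++) a[i] = x[i];
  VecRestoreArray(XVec, &a);

  /* gather everything onto processor 0 (XSeq is created by the call) */
  VecScatterCreateToZero(XVec, &ctx, &XSeq);
  VecScatterBegin(ctx, XVec, XSeq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(ctx, XVec, XSeq, INSERT_VALUES, SCATTER_FORWARD);

  if (!rank) {
    VecGetArray(XSeq, &a);
    /* ... permute into the user ordering and write to disk ... */
    VecRestoreArray(XSeq, &a);
  }

  VecScatterDestroy(&ctx);
  VecDestroy(&XSeq);
  VecDestroy(&XVec);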
Though this is working, my second thought is whether this is not overkill,
and whether this collecting and distributing can be done in a smarter or more
elegant way. With two auxiliary vectors it takes quite a bit of code just to
get some distributed data to disk.
Any thoughts and suggestions are much appreciated.
kind regards,
Wienand Drenth
--
Wienand Drenth PhD
Eindhoven, the Netherlands