Dear all,<br><br>I have a question regarding the use of the VecScatter routines to collect data from all processors onto one, and to distribute it back, in relation to file I/O. The setting is roughly as follows:<br>
<br>At a certain stage in my computation, I have computed some results in parallel. Let these results be in an array X (a native C or Fortran array, not a PETSc vector; X might be multidimensional as well). The Xs of all processors together constitute my global result, which I would like to write to disk. However, each X is of course only part of the total, so I need to gather the pieces of X from all processors into one single structure.<br>
Furthermore, the Xs are in PETSc ordering (1 ... n for processor 1, n+1 ... n2 for processor 2, etc.), which does not reflect the ordering defined by the user. So before writing I need to permute the values of X accordingly.<br>
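<br>For the permutation step, here is a minimal sketch of how I imagine this could be handled with PETSc's AO (application ordering) object, which records the mapping between the PETSc ordering and a user-defined ordering; all names here are placeholders of my own, and I am assuming AO is applicable to my situation:<br>

```c
#include <petscao.h>

/* Hedged sketch, not tested code: use an AO to translate index lists
 * from the PETSc ordering into the user-defined ordering before writing.
 * app_idx and petsc_idx are placeholder names for the two index sets. */
PetscErrorCode MapToUserOrdering(MPI_Comm comm, PetscInt nlocal,
                                 const PetscInt app_idx[],  /* user ordering  */
                                 PetscInt       petsc_idx[]) /* PETSc ordering */
{
  AO ao;

  PetscFunctionBeginUser;
  /* Build the application ordering from the two index sets. */
  PetscCall(AOCreateBasic(comm, nlocal, app_idx, petsc_idx, &ao));
  /* Translate the PETSc-ordered indices into the user's ordering in place. */
  PetscCall(AOPetscToApplication(ao, nlocal, petsc_idx));
  PetscCall(AODestroy(&ao));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```
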
<br>My first thought is to use the VecScatter routines: define a parallel PETSc vector XVec, and transfer the values of X into XVec (with VecGetArray and VecRestoreArray, for example). I also define a sequential vector XSeq. With VecScatterCreateToZero I create a scatter context, which lets me gather the distributed data into XSeq. The data of XSeq is then written to disk by the zeroth processor (again using, for example, VecGetArray and VecRestoreArray to access the data).<br>
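<br>In code, the approach I have in mind looks roughly like the sketch below (function and variable names are my own inventions, and I assume a one-dimensional X of nlocal scalars per process; error handling via PetscCall as in recent PETSc versions):<br>

```c
#include <petscvec.h>
#include <stdio.h>

/* Hedged sketch of the two-vector approach described above:
 * wrap the native array in a parallel Vec, scatter it to rank 0,
 * and write it to disk there. */
PetscErrorCode WriteDistributedArray(MPI_Comm comm, PetscScalar *x,
                                     PetscInt nlocal, const char *fname)
{
  Vec                xvec, xseq;
  VecScatter         ctx;
  PetscMPIInt        rank;
  const PetscScalar *a;
  PetscInt           i, ntotal;

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_rank(comm, &rank));
  /* Wrap the native C array in a parallel PETSc vector (no copy). */
  PetscCall(VecCreateMPIWithArray(comm, 1, nlocal, PETSC_DECIDE, x, &xvec));
  /* Gather the whole distributed vector onto rank 0. */
  PetscCall(VecScatterCreateToZero(xvec, &ctx, &xseq));
  PetscCall(VecScatterBegin(ctx, xvec, xseq, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecScatterEnd(ctx, xvec, xseq, INSERT_VALUES, SCATTER_FORWARD));
  if (rank == 0) {
    /* Only rank 0 holds the full result; write it out. */
    FILE *fp = fopen(fname, "w");
    PetscCall(VecGetSize(xseq, &ntotal));
    PetscCall(VecGetArrayRead(xseq, &a));
    for (i = 0; i < ntotal; i++) fprintf(fp, "%g\n", (double)PetscRealPart(a[i]));
    PetscCall(VecRestoreArrayRead(xseq, &a));
    fclose(fp);
  }
  PetscCall(VecScatterDestroy(&ctx));
  PetscCall(VecDestroy(&xseq));
  PetscCall(VecDestroy(&xvec));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```
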
<br>Although this works, my second thought is that it may be overkill, and that this collecting and distributing could perhaps be done in a smarter or more elegant way. With two auxiliary vectors, it takes quite some code just to get distributed data to disk.<br>
<br>Any thoughts and suggestions are much appreciated.<br><br>kind regards,<br><br>Wienand Drenth<br clear="all"><br>-- <br>Wienand Drenth PhD<br>Eindhoven, the Netherlands<br>