<div class="gmail_quote">On Mon, Oct 17, 2011 at 15:17, Wienand Drenth <span dir="ltr"><<a href="mailto:w.drenth@gmail.com">w.drenth@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div id=":24e">I have a question about using the VecScatter routines to collect and distribute data from all processors to one, and vice versa, in relation to file I/O. The setting is roughly as follows:<br>
<br>At a certain stage in my computation, I have computed some results in parallel. Let these results be in an array X (X is a native C or Fortran array, not a PETSc Vec; X might be multidimensional as well). The X's of all processors together constitute my global result, and I would like to write it to disk. However, each X is of course only part of the total, so I need to gather the pieces of X from all processors into one single structure. <br>
Furthermore, the X's are in the PETSc ordering (1 ... n for processor 1, n+1 ... n2 for processor 2, etc.), which does not reflect the ordering defined by the user. So before writing I need to permute the values of X accordingly.<br>
</div></blockquote><div><br></div><div>Simple solution:</div><div><br></div><div>Make a DMDA to represent your multi-dimensional layout, put your array values into the Vec you get from the DM, and call VecView(). It will do a parallel write and your vector will end up in the natural ordering. You can VecLoad() it later or read it with other software.</div>
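<div><br></div><div>A minimal sketch of that workflow (the grid sizes, file name, and fill values are illustrative, and it uses the current PetscCall() error-checking macros; older releases spell this ierr = ...; CHKERRQ(ierr)):</div><div><br></div>

```c
/* Sketch: store a distributed 2-D result in a DMDA-managed Vec and
   write it in parallel with VecView(). Grid size 64x64, one dof per
   node, and the file name "solution.dat" are illustrative choices. */
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM          da;
  Vec         x;
  PetscViewer viewer;
  PetscScalar **a;
  PetscInt    i, j, xs, ys, xm, ym;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* 2-D structured grid; PETSc decides the parallel decomposition */
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, &da));
  PetscCall(DMSetUp(da));
  PetscCall(DMCreateGlobalVector(da, &x));

  /* Copy the locally computed values into the Vec using global (i,j)
     indices; the DMDA handles the mapping to the PETSc ordering */
  PetscCall(DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL));
  PetscCall(DMDAVecGetArray(da, x, &a));
  for (j = ys; j < ys + ym; j++)
    for (i = xs; i < xs + xm; i++)
      a[j][i] = (PetscScalar)(i + j); /* replace with your computed X values */
  PetscCall(DMDAVecRestoreArray(da, x, &a));

  /* Parallel write; the file ends up in the natural (user) ordering */
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "solution.dat",
                                  FILE_MODE_WRITE, &viewer));
  PetscCall(VecView(x, viewer));
  PetscCall(PetscViewerDestroy(&viewer));

  PetscCall(VecDestroy(&x));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}
```

<div>Reading it back is symmetric: open the viewer with FILE_MODE_READ and call VecLoad() on a Vec obtained from the same DMDA.</div>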
</div>