Thanks Barry. That makes perfect sense.

Best,
Mohamad

On Wed, Oct 12, 2011 at 3:50 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
On Oct 12, 2011, at 5:42 PM, Mohamad M. Nasr-Azadani wrote:

> Hi everyone,
>
> I think I know the answer to my question, but I was double checking.
> When using
> PetscViewerBinaryOpen();
>
> It is mentioned that
> "For writing files it only opens the file on processor 0 in the communicator."
>
> Does that mean when writing a parallel vector to file using VecView(), all the data from other processors is first sent to processor zero and then dumped into the file?

No, all the data is not sent to process zero before writing. That is, process 0 does not need enough memory to store all the data before writing.

Instead, the processes take turns sending data to process 0, which immediately writes it out to disk.
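For reference, the usual pattern looks roughly like this (a minimal sketch; the vector x and the file name are placeholders, not from your code):

    Vec            x;        /* a parallel vector that has already been assembled */
    PetscViewer    viewer;
    PetscErrorCode ierr;

    /* Only process 0 actually opens the file for writing */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"x.bin",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);

    /* Each process sends its piece to process 0 in turn, which writes it to disk */
    ierr = VecView(x,viewer);CHKERRQ(ierr);

    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);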
> If so, that would be a very slow process for big datasets and a large number of processors?

For fewer than a few thousand processes this is completely fine, and nothing else would be much faster.
> Any suggestions to speed that process up?

We have various MPI-IO options that use MPI-IO to have several processes write to disk at the same time; that is useful for very large numbers of processes.
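For example, with a recent PETSc the binary viewer can be switched to MPI-IO with something like the following (a sketch; the exact function name and the availability of MPI-IO depend on which PETSc version you have and how it was configured):

    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"x.bin",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
    /* ask the viewer to use collective MPI-IO instead of funneling everything through process 0 */
    ierr = PetscViewerBinarySetUseMPIIO(viewer,PETSC_TRUE);CHKERRQ(ierr);
    ierr = VecView(x,viewer);CHKERRQ(ierr);

The same thing can also be turned on from the command line with the -viewer_binary_mpiio option.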
Barry

>
> Best,
> Mohamad
>