[petsc-users] Binary I/O
Mohamad M. Nasr-Azadani
mmnasr at gmail.com
Wed Oct 12 18:17:11 CDT 2011
Thanks Barry. That makes perfect sense.
Best,
Mohamad
On Wed, Oct 12, 2011 at 3:50 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> On Oct 12, 2011, at 5:42 PM, Mohamad M. Nasr-Azadani wrote:
>
> > Hi everyone,
> >
> > I think I know the answer to my question, but I was double checking.
> > When using
> > PetscViewerBinaryOpen();
> >
> > It is mentioned that
> > "For writing files it only opens the file on processor 0 in the
> communicator."
> >
> > Does that mean when writing a parallel vector to file using VecView(),
> all the data from other processors is first sent to processor zero and then
> dumped into the file?
>
>    No, the data is not all sent to process zero before writing. That is,
> process 0 does not need enough memory to store all the data before writing.
>
>    Instead, the processes take turns sending data to process 0, which
> immediately writes it out to disk.
>
> > If so, wouldn't that be a very slow process for big datasets and a large
> number of processors?
>
>    For fewer than a few thousand processes this is completely fine, and
> nothing else would be much faster.
>
> > Any suggestions to speed that process up?
>
>    We have various MPI IO options that use MPI IO to have several
> processes writing to disk at the same time; that is useful for very large
> numbers of processes.
>
> Barry
>
> >
> > Best,
> > Mohamad
> >
>
>
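For reference, the usage under discussion can be sketched as below. This is a minimal example, not code from the thread; the vector size and file name are made up, and the error-checking style follows the PETSc conventions of that era (ierr/CHKERRQ). Only process 0 opens the file, and during VecView() the other processes take turns shipping their local pieces to it, as Barry describes.

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            v;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* A parallel vector distributed across all processes in PETSC_COMM_WORLD */
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 1000000, &v);CHKERRQ(ierr);
  ierr = VecSet(v, 1.0);CHKERRQ(ierr);

  /* For writing, only process 0 actually opens the file; the remaining
     processes stream their slices to process 0 inside VecView(), so no
     single process ever needs to hold the whole vector in memory. */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "vec.dat",
                               FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = VecView(v, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  ierr = VecDestroy(&v);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}
```

For the MPI IO path Barry mentions, PETSc exposes a switch on the binary viewer (check the manual pages for the exact name in your version; the runtime option is, as far as I know, -viewer_binary_mpiio), which lets multiple processes write to the file concurrently instead of funneling through process 0.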