<div dir="ltr">On Mon, Jun 17, 2013 at 2:47 PM, Frederik Treue <span dir="ltr"><<a href="mailto:frtr@fysik.dtu.dk" target="_blank">frtr@fysik.dtu.dk</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">OK, so I got a little further: Now I can read 1D fields on any number of<br>
processors, and 2D fields on 1 processor :). My code:<br></blockquote><div><br></div><div style>What are you trying to do? Why not just use VecView() and VecLoad(), which</div><div style>work in parallel and are scalable? I doubt you want to reproduce that code.</div>
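A minimal sketch of that round trip (assuming ReRead is the DMDA global vector created in the code below; error checking abbreviated):

```c
PetscViewer viewer;

/* write: collective over PETSC_COMM_WORLD; PETSc serializes the
   distributed vector into the file in natural ordering */
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "./testvector",
                             FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
ierr = VecView(ReRead, viewer);CHKERRQ(ierr);
ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

/* read back on any number of ranks; VecLoad distributes the entries
   according to the vector's own parallel layout */
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "./testvector",
                             FILE_MODE_READ, &viewer);CHKERRQ(ierr);
ierr = VecLoad(ReRead, viewer);CHKERRQ(ierr);
ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
```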
<div style><br></div><div style> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
MPI_Comm_size(PETSC_COMM_WORLD,&cs);<br>
MPI_Comm_rank(PETSC_COMM_WORLD,&rank);<br>
<br>
ierr = VecGetArray(dummy,&dump); CHKERRQ(ierr);<br>
ierr = VecGetArray(ReRead,&readarray); CHKERRQ(ierr);<br>
ierr=PetscViewerBinaryOpen(PETSC_COMM_WORLD,"./testvector",FILE_MODE_READ,&fileview);CHKERRQ(ierr);<br>
<br>
for (i=0;i<rank;i++) {<br>
printf("Rank BEFORE: %d\n",rank);<br>
ierr=PetscViewerBinaryRead(fileview,(void*)dump,nx*ny/cs,PETSC_SCALAR);CHKERRQ(ierr);<br>
}<br>
<br>
ierr=PetscViewerBinaryRead(fileview,(void*)readarray,nx*ny/cs,PETSC_SCALAR);CHKERRQ(ierr);<br>
<br>
for (i=rank+1;i<cs;i++) {<br>
printf("Rank: AFTER: %d\n",rank);<br>
ierr=PetscViewerBinaryRead(fileview,(void*)dump,nx*ny/cs,PETSC_SCALAR);CHKERRQ(ierr);<br>
}<br>
<br>
However, this fails for 2D with more than one processor: The resulting<br>
vector is garbled and I get memory corruption. Am I on the right track,<br>
or is there another way to achieve an MPI version of binary read? The<br>
above code seems somewhat cumbersome...<br>
<br>
/Frederik Treue<br>
<br>
On Mon, 2013-06-17 at 11:06 +0200, Matthew Knepley wrote:<br>
> On Mon, Jun 17, 2013 at 10:59 AM, Frederik Treue <<a href="mailto:frtr@fysik.dtu.dk">frtr@fysik.dtu.dk</a>><br>
> wrote:<br>
> Hi guys,<br>
><br>
> is PetscViewerBinaryRead() working? The examples given on the<br>
> web page either fail with out-of-memory errors (ex65 and ex65dm)<br>
> or don't compile (ex61) for me.<br>
><br>
> Oddly, I have problems only when trying to use MPI. My code:<br>
><br>
> ierr=DMDACreate1d(PETSC_COMM_WORLD,DMDA_BOUNDARY_GHOSTED,156,1,1,PETSC_NULL,&da);CHKERRQ(ierr);<br>
><br>
> ierr=DMCreateGlobalVector(da,&ReRead);CHKERRQ(ierr);<br>
><br>
> ierr=VecAssemblyBegin(ReRead);CHKERRQ(ierr);<br>
> ierr=VecAssemblyEnd(ReRead);CHKERRQ(ierr);<br>
><br>
> double *workarray=(double*)malloc(156*sizeof(double));<br>
><br>
> ierr = VecGetArray(ReRead,&workarray); CHKERRQ(ierr);<br>
><br>
><br>
> 1) This does not make sense. You allocate an array, but then overwrite<br>
> that<br>
> array with the one inside the vector ReRead.<br>
><br>
> ierr=PetscViewerBinaryOpen(PETSC_COMM_WORLD,"./testvector",FILE_MODE_READ,&fileview);CHKERRQ(ierr);<br>
><br>
> ierr=PetscViewerBinaryRead(fileview,&dummy,1,PETSC_SCALAR);CHKERRQ(ierr);<br>
><br>
> ierr=PetscViewerBinaryRead(fileview,(void*)workarray,156,PETSC_SCALAR);CHKERRQ(ierr);<br>
> printf("TEST: %g\n",workarray[144]);<br>
> ierr=VecRestoreArray(ReRead,&workarray);<br>
><br>
><br>
> 2) In parallel, the local array in your vector ReRead will be smaller<br>
> than the global size 156. Thus this read also<br>
> does not make sense.<br>
><br>
><br>
> Matt<br>
><br>
> VecView(ReRead,PETSC_VIEWER_DRAW_WORLD);<br>
><br>
> This works fine as long as I'm on a single processor. The file<br>
> read also works under MPI with 2 processors, as evidenced by the<br>
> printf statement (which gives the right result from both processors).<br>
> However, the VecView statement fails with "Vector not generated from a<br>
> DMDA!". Why?<br>
><br>
> And second question: How do I generalize to 2d? Can I give a<br>
> 2d array to<br>
> PetscViewerBinaryRead and expect it to work? And how should it<br>
> be<br>
> malloc'ed? Or does one give a 1d array to<br>
> PetscViewerBinaryRead and let<br>
> VecRestoreArray do the rest? Or what?<br>
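On the 2D question, one common pattern is to let the DMDA hand out a 2D-indexed view of the local storage instead of malloc'ing anything yourself. A sketch, assuming da is a 2D DMDA and v a global vector created from it (error checking abbreviated):

```c
PetscScalar **a;
PetscInt      i, j, xs, ys, xm, ym;

ierr = DMDAVecGetArray(da, v, &a);CHKERRQ(ierr);
ierr = DMDAGetCorners(da, &xs, &ys, PETSC_NULL,
                      &xm, &ym, PETSC_NULL);CHKERRQ(ierr);
for (j = ys; j < ys + ym; j++)
  for (i = xs; i < xs + xm; i++)
    a[j][i] = 0.0;   /* global (i,j) indexing, but only local storage */
ierr = DMDAVecRestoreArray(da, v, &a);CHKERRQ(ierr);
```

Combined with VecLoad() for the file I/O, this avoids both the manual offset arithmetic and the layout mismatch.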
><br>
> /Frederik Treue<br>
><br>
><br>
><br>
><br>
<span class="HOEnZb"><font color="#888888">><br>
<br>
<br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener
</div></div>