On Thu, Jun 9, 2011 at 6:18 PM, Matthew Knepley <knepley@gmail.com> wrote:
> On Thu, Jun 9, 2011 at 6:01 PM, <zhenglun.wei@gmail.com> wrote:
>> Dear Sir/Madam,
>> I'm still studying ex29 in src/ksp/ksp/examples/tutorials. Earlier I ran into a problem with VecView_VTK in parallel computation, and I'm trying to modify it in order to output some data from the computation.

Here is a better answer. If you want output, throw away this old function, which is broken, and use

  PetscViewerASCIIOpen()
  PetscViewerASCIISetFormat(PETSC_VIEWER_ASCII_VTK)
  DMView()
  VecView()
  PetscViewerDestroy()

   Thanks,

      Matt

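A minimal sketch of that sequence, assuming a 2D DMDA "da" living on PETSC_COMM_WORLD with a global vector "x" built on it; the helper name WriteFieldVTK is made up, and the format call is spelled PetscViewerSetFormat() in the PETSc API (newer releases prefer PetscViewerPushFormat()):

  #include <petscdmda.h>   /* petscda.h in very old releases */

  /* Sketch only: write a DMDA and a vector defined on it in legacy ASCII VTK format. */
  PetscErrorCode WriteFieldVTK(DM da, Vec x, const char *filename)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD, filename, &viewer);CHKERRQ(ierr);
    ierr = PetscViewerSetFormat(viewer, PETSC_VIEWER_ASCII_VTK);CHKERRQ(ierr);
    ierr = DMView(da, viewer);CHKERRQ(ierr);            /* writes the structured-grid header */
    ierr = VecView(x, viewer);CHKERRQ(ierr);            /* writes the field values */
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);   /* older PETSc: PetscViewerDestroy(viewer) */
    return 0;
  }

The viewer handles the cross-process gathering itself, so none of the hand-rolled MPI code quoted below is needed.
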
<div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">1) My first questions is that what does this section do in VecView_VTK:<br><br>272: MPI_Comm_rank(comm, &rank);<br>
273: MPI_Comm_size(comm, &size);<br>274: MPI_Reduce(&n, &maxn, 1, MPIU_INT, MPI_MAX, 0, comm);<br>
275: tag = ((PetscObject) viewer)->tag;<br>276: if (!rank) {<br>277: PetscMalloc((maxn+1) * sizeof(PetscScalar), &values);<br>278: for(i = 0; i < n; i++) {<br>279: PetscViewerASCIIPrintf(viewer, "%G\n", PetscRealPart(array[i]));<br>
280: }<br>281: for(p = 1; p < size; p++) {<br>282: MPI_Recv(values, (PetscMPIInt) n, MPIU_SCALAR, p, tag, comm, &status);<br>283: MPI_Get_count(&status, MPIU_SCALAR, &nn);<br>284: for(i = 0; i < nn; i++) {<br>
285: PetscViewerASCIIPrintf(viewer, "%G\n", PetscRealPart(array[i]));<br>286: }<br>287: }<br>288: PetscFree(values);<br>289: } else {<br>290: MPI_Send(array, n, MPIU_SCALAR, 0, tag, comm);<br>
291: }<br><br> What I understand is: it gather all the data from different process in parallel computation, and output it to the 'viewer'. I comment out everything in VecView_VTK except this part, there is no error message coming up in my parallel computation so far. <br>
>>
>> 2) However, I really don't know how it splits the domain for the parallel computation. For example, if I use 4 processes, is the domain split like:
>
> The DMDA describes the domain splitting; a sketch of how to query it follows the quoted layouts below.
>
>    Matt
>
>>  a)
>>          |
>>      0   |   1
>>          |
>>   -------|-------
>>          |
>>      2   |   3
>>          |
>>
>>  b)
>>          |
>>      0   |   2
>>          |
>>   -------|-------
>>          |
>>      1   |   3
>>          |
>>
>>  c)
>>      |   |   |
>>    0 | 1 | 2 | 3
>>      |   |   |
>>
>>  d)
>>         0
>>   ---------------
>>         1
>>   ---------------
>>         2
>>   ---------------
>>         3
>>
>> thanks in advance,
>> Alan
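
A minimal sketch of how to query the splitting the DMDA actually chose, assuming a 2D DMDA named da on PETSC_COMM_WORLD such as the one ex29 creates; the helper name ReportOwnership and the printed wording are illustrative only:

  #include <petscdmda.h>   /* petscda.h in very old releases */

  /* Sketch only: print each rank's owned patch of a 2D DMDA,
     which shows how the domain was divided among the processes. */
  PetscErrorCode ReportOwnership(DM da)
  {
    PetscInt       xs, ys, xm, ym, m, n;
    PetscMPIInt    rank;
    PetscErrorCode ierr;

    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
    /* m, n: number of processes in the x and y directions of the process grid */
    ierr = DMDAGetInfo(da, NULL, NULL, NULL, NULL, &m, &n, NULL, NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
    /* xs, ys: first owned index in each direction; xm, ym: number of owned points */
    ierr = DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);CHKERRQ(ierr);
    ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,
             "[%d] process grid %d x %d, owns i = %d..%d, j = %d..%d\n",
             rank, (int)m, (int)n, (int)xs, (int)(xs + xm - 1), (int)ys, (int)(ys + ym - 1));CHKERRQ(ierr);
    ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);  /* older PETSc: no FILE argument */
    return 0;
  }

DMDAGetOwnershipRanges() returns the same decomposition in a single call if only the per-direction sizes are needed.
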

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener