Numeric data that the solver sees should be stored in Vecs. You can put other scalars in Vecs if you like.
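For example, an existing C array can be wrapped in a Vec without copying. The sketch below is only illustrative: the local size, the array name, and the choice of VecCreateMPI/VecPlaceArray are assumptions about your layout rather than something taken from your code, and error checking is omitted for brevity.

#include <stdlib.h>
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec          x;
  PetscInt     nlocal = 100;   /* illustrative local size on this rank */
  PetscScalar *mydata;         /* stands in for the application's C array */

  PetscInitialize(&argc, &argv, NULL, NULL);
  mydata = (PetscScalar *)malloc(nlocal * sizeof(PetscScalar));

  /* Create a parallel Vec with the matching local layout, then point it
     at the existing storage; the data is not copied. */
  VecCreateMPI(PETSC_COMM_WORLD, nlocal, PETSC_DETERMINE, &x);
  VecPlaceArray(x, mydata);

  /* ... use x with VecScatterCreate/VecScatterBegin/VecScatterEnd,
     solvers, etc. ... */

  VecResetArray(x);            /* detach before destroying the Vec */
  VecDestroy(&x);
  free(mydata);
  PetscFinalize();
  return 0;
}

Once the data lives in Vecs, VecScatterBegin()/VecScatterEnd() take care of the packing and non-blocking communication that the hand-written derived types were doing.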
<div class="gmail_quote">On Sep 6, 2012 5:48 PM, "Zhenglun (Alan) Wei" <<a href="mailto:zhenglun.wei@gmail.com">zhenglun.wei@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
Dear Dr. Brown,

I'm not quite familiar with VecScatter. I just read its explanation; it seems to require that my data be stored as vectors (is that the Vec in PETSc?). However, my data are stored as plain arrays in my C program.

Is this a problem in MPI, or is it more likely a problem in my code?

thanks,
Alan

On 9/6/2012 5:44 PM, Jed Brown wrote:
Are you familiar with VecScatter?
<div class="gmail_quote">On Sep 6, 2012 5:38 PM, "Zhenglun (Alan)
Wei" <<a href="mailto:zhenglun.wei@gmail.com" target="_blank">zhenglun.wei@gmail.com</a>>
wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Dear All,

I hope you're having a nice day.

I ran into a memory problem with MPI data communication. I guess this is a good place to ask, since you are experts and may have run into the same problem before.

I used MPI derived data types (MPI_Type_contiguous, MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation of a 3D problem. The communication itself is fine: I checked every value sent and received. However, the memory keeps increasing during communication. I therefore tested each of the three types: MPI_Type_contiguous shows no problem, while MPI_Type_vector and MPI_Type_indexed show memory accumulation. I tried MPI_Type_free, but it does not help. Has anyone seen this problem before?

Could this be related to the non-blocking MPI communication (MPI_Isend and MPI_Irecv)? I have to use non-blocking communication because blocking communication is extremely slow when a lot of data is involved.

Is there an alternative in PETSc that does work similar to MPI derived types?

thanks,
Alan
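Regarding the memory growth with MPI_Type_vector/MPI_Type_indexed: a common cause is building a new datatype (or leaving requests uncompleted) on every exchange without releasing them. This is only a guess at what the code in question does; the sketch below shows the intended lifecycle, with made-up placeholder names and parameters (count, blocklen, stride, neighbor).

#include <mpi.h>

/* One halo exchange of a strided face with a single neighbor.  All
 * parameters here are placeholders for the application's real values. */
static void exchange_face(double *sendbuf, double *recvbuf,
                          int count, int blocklen, int stride,
                          int neighbor, MPI_Comm comm)
{
  MPI_Datatype face;
  MPI_Request  reqs[2];

  MPI_Type_vector(count, blocklen, stride, MPI_DOUBLE, &face);
  MPI_Type_commit(&face);

  MPI_Irecv(recvbuf, 1, face, neighbor, 0, comm, &reqs[0]);
  MPI_Isend(sendbuf, 1, face, neighbor, 0, comm, &reqs[1]);

  /* Completing the requests and freeing the committed type is what lets
   * MPI release its internal resources; skipping either step makes
   * memory grow on every call. */
  MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
  MPI_Type_free(&face);
}

Better still is to create and commit the datatype once during setup, reuse it for every exchange, and free it only at shutdown. The PETSc alternative asked about above is to keep the field in Vecs and let VecScatter (or DMDA ghost updates via DMGlobalToLocalBegin/DMGlobalToLocalEnd) handle the packing and the non-blocking messaging.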