<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Dear folks, <br>
      I ran some more tests, since I want to pin down where I am going
      wrong. <br>
      As Dr. Smith suggested, I tested my code with both OpenMPI and
      MPICH. Both of them show the memory accumulation problem, so I
      suppose the bug is in my own code. I went into the code and
      changed the non-blocking MPI communication to blocking
      communication, and the memory accumulation problem simply went
      away. However, I have to change it back, since blocking MPI
      communication is far too slow for the massive data exchange I
      need. For now, I am reading up on related topics in non-blocking
      MPI communication. <br>
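      For reference, the pattern I am trying to follow is roughly the
      sketch below (a minimal illustration, not my actual code): the
      derived type is committed once, every MPI_Isend/MPI_Irecv is
      completed with MPI_Waitall, and MPI_Type_free is called only after
      the last communication that uses the type. <br>
      <pre>
/* Minimal sketch (illustration only): exchange one x-face of a 3D block
 * with neighbors, using a strided derived type and non-blocking calls. */
#include &lt;mpi.h&gt;
#include &lt;stdlib.h&gt;

int main(int argc, char **argv)
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const int nx = 8, ny = 8, nz = 8;
  double *u = calloc((size_t)nx * ny * nz, sizeof(double));

  /* One x-face of the block: ny*nz doubles, each separated by nx. */
  MPI_Datatype face;
  MPI_Type_vector(ny * nz, 1, nx, MPI_DOUBLE, &face);
  MPI_Type_commit(&face);            /* commit once, reuse every step    */

  int left  = (rank - 1 + size) % size;
  int right = (rank + 1) % size;

  for (int step = 0; step &lt; 10; step++) {
    MPI_Request req[2];
    MPI_Irecv(&u[0],      1, face, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(&u[nx - 1], 1, face, right, 0, MPI_COMM_WORLD, &req[1]);
    /* Every request must be completed; otherwise the library's internal
     * bookkeeping (and packed copies of the strided data) keeps growing. */
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
  }

  MPI_Type_free(&face);              /* free the type after its last use */
  free(u);
  MPI_Finalize();
  return 0;
}
</pre>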
      I have cut out the unrelated parts of my code and attached the
      communication part here. Could anyone briefly check whether there
      is any obvious mistake in the program? After unzipping the file,
      './AlanRun' will run the program. <br>
<br>
I really appreciate your help :)<br>
Alan<br>
<br>
<br>
On 9/6/2012 5:56 PM, Jed Brown wrote:<br>
</div>
<blockquote
cite="mid:CAM9tzSnsA6mWvqVhdD44Z1Ya5JbbtZyQ9zWDWsaWnhSExU8mhA@mail.gmail.com"
type="cite">
<p>Numeric data that the solver sees should be stored in Vecs. You
can put other scalars in Vecs if you like.</p>
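    <p>For illustration only (an untested sketch; the exact signature of
      VecCreateMPIWithArray can differ between PETSc versions), an
      existing C array can be wrapped in a Vec without copying:</p>
    <pre>
/* Untested sketch: wrap an existing C array in a parallel Vec without
 * copying it, so PETSc (a solver, a VecScatter, ...) can use the data. */
#include &lt;petscvec.h&gt;

int main(int argc, char **argv)
{
  Vec            v;
  PetscScalar    data[100];         /* the existing C array           */
  PetscInt       n = 100;           /* local length on this process   */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* block size 1; global length computed from the local lengths */
  ierr = VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, n, PETSC_DECIDE,
                               data, &v); CHKERRQ(ierr);

  /* ... use v; data[] is still the underlying storage ... */

  ierr = VecDestroy(&v); CHKERRQ(ierr);   /* does not free data[] */
  ierr = PetscFinalize();
  return ierr;
}
</pre>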
<div class="gmail_quote">On Sep 6, 2012 5:48 PM, "Zhenglun (Alan)
Wei" <<a moz-do-not-send="true"
href="mailto:zhenglun.wei@gmail.com">zhenglun.wei@gmail.com</a>>
wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>Dear Dr. Brown,<br>
            <div>Dear Dr. Brown,<br>
                 I'm not very familiar with VecScatter. I just read its
              documentation; it seems to require that my data be stored
              as vectors (are these the PETSc Vec objects?). However, my
              data are stored as plain C arrays in my program. <br>
                 Also, is this a problem in MPI, or is it more likely a
              problem in my code?<br>
<br>
thanks,<br>
Alan <br>
On 9/6/2012 5:44 PM, Jed Brown wrote:<br>
</div>
<blockquote type="cite">
<p>Are you familiar with VecScatter?</p>
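          <p>A rough, untested sketch of the basic pattern (the vector
            sizes and index sets below are made up purely for
            illustration):</p>
          <pre>
/* Untested sketch: gather a few entries of a parallel Vec onto every
 * process with a VecScatter, instead of hand-rolled MPI derived types. */
#include &lt;petscvec.h&gt;

int main(int argc, char **argv)
{
  Vec            global, local;
  IS             from, to;
  VecScatter     ctx;
  PetscInt       idx[3] = {0, 5, 9};   /* global entries wanted locally */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &global); CHKERRQ(ierr);
  ierr = VecSet(global, 1.0); CHKERRQ(ierr);
  ierr = VecCreateSeq(PETSC_COMM_SELF, 3, &local); CHKERRQ(ierr);

  /* which global entries to take, and where to put them locally */
  ierr = ISCreateGeneral(PETSC_COMM_SELF, 3, idx, PETSC_COPY_VALUES, &from); CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, 3, 0, 1, &to); CHKERRQ(ierr);

  ierr = VecScatterCreate(global, from, local, to, &ctx); CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx, global, local, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, global, local, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);

  /* the scatter can be reused every time step, then destroyed once */
  ierr = VecScatterDestroy(&ctx); CHKERRQ(ierr);
  ierr = ISDestroy(&from); CHKERRQ(ierr);
  ierr = ISDestroy(&to); CHKERRQ(ierr);
  ierr = VecDestroy(&local); CHKERRQ(ierr);
  ierr = VecDestroy(&global); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
</pre>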
<div class="gmail_quote">On Sep 6, 2012 5:38 PM, "Zhenglun
(Alan) Wei" <<a moz-do-not-send="true"
href="mailto:zhenglun.wei@gmail.com" target="_blank">zhenglun.wei@gmail.com</a>>
wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
Dear All,<br>
I hope you're having a nice day.<br>
                   I have run into a memory problem with MPI data
                communication. I guess this is a good place to ask,
                since you are experts and may have seen the same problem
                before.<br>
                   I use MPI derived data types (MPI_Type_contiguous,
                MPI_Type_vector and MPI_Type_indexed) to communicate
                data in a simulation of a 3D problem. The communication
                itself is correct, as I have checked every single value
                sent and received. However, the memory usage keeps
                increasing while the communication runs. Therefore, I
                tested each of the three types separately.
                MPI_Type_contiguous shows no problem, while
                MPI_Type_vector and MPI_Type_indexed both show memory
                accumulation. I tried calling MPI_Type_free, but it does
                not help. Has anyone seen this problem before?<br>
                   Could this be related to the non-blocking MPI
                communication (MPI_Isend and MPI_Irecv)? I have to use
                non-blocking communication, since blocking communication
                is extremely slow when a lot of data is involved.<br>
                   Is there an alternative in PETSc that can do work
                similar to the MPI derived types?<br>
<br>
thanks,<br>
Alan<br>
</blockquote>
</div>
</blockquote>
<br>
</div>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>