[petsc-users] MPI Derived Data Type and Non-Blocking MPI Send/Receive
Zhenglun (Alan) Wei
zhenglun.wei at gmail.com
Fri Sep 7 16:50:38 CDT 2012
Dear folks,
I ran more tests, since I want to pin down where I am going wrong.
As Dr. Smith suggested, I tested my code with both OpenMPI and
MPICH, and the memory accumulation problem appears with both.
Therefore, I suppose there is a bug in my code. I went into the code
and changed the non-blocking MPI communication to a blocking one,
and the memory accumulation problem went away by itself. However, I
have to change it back, since blocking MPI communication is far too
slow for the large amount of data I need to exchange. Now I am
searching for material on non-blocking MPI communication.
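
For reference, the pattern I am trying to follow looks roughly like the
sketch below (the sizes, ring-style neighbor ranks and loop count are
made up for illustration; this is not the attached code). The derived
type is created and committed once, every MPI_Isend/MPI_Irecv is
completed with MPI_Waitall before the buffers are touched again, and
MPI_Type_free is called once at the end.

#include <mpi.h>
#include <stdlib.h>

/* Sketch: exchange one y-z face of a 3D block (contiguous in x) with
   the left/right neighbors using MPI_Type_vector and non-blocking
   calls. All sizes and neighbors are hypothetical.                  */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nx = 8, ny = 8, nz = 8;          /* hypothetical block  */
    double *u = calloc((size_t)nx * ny * nz, sizeof(double));

    /* One y-z face: ny*nz doubles, each a stride of nx apart.        */
    MPI_Datatype face;
    MPI_Type_vector(ny * nz, 1, nx, MPI_DOUBLE, &face);
    MPI_Type_commit(&face);                    /* commit once, reuse  */

    int left  = (rank - 1 + size) % size;      /* illustrative ring   */
    int right = (rank + 1) % size;

    for (int step = 0; step < 100; step++) {
        MPI_Request req[4];
        /* send first/last interior columns, receive into the ghosts */
        MPI_Isend(&u[1],      1, face, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(&u[nx - 1], 1, face, right, 0, MPI_COMM_WORLD, &req[1]);
        MPI_Isend(&u[nx - 2], 1, face, right, 1, MPI_COMM_WORLD, &req[2]);
        MPI_Irecv(&u[0],      1, face, left,  1, MPI_COMM_WORLD, &req[3]);
        /* every request must be completed; requests that are never
           waited on (or freed) keep accumulating in memory           */
        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    }

    MPI_Type_free(&face);                      /* free the type once  */
    free(u);
    MPI_Finalize();
    return 0;
}

If the real code commits a new datatype inside the time loop without
freeing it, or never completes its requests, that alone would explain
steady memory growth.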
I have cut out the unrelated parts of my code and attached just the
communication part. Could anyone briefly check whether there is any
obvious mistake in the program? After unzipping the file, './AlanRun'
will run the program.
I really appreciate your help :)
Alan
On 9/6/2012 5:56 PM, Jed Brown wrote:
>
> Numeric data that the solver sees should be stored in Vecs. You can
> put other scalars in Vecs if you like.
>
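
(If I understand this correctly, something like the sketch below would
let me keep the data in a Vec while still working on it as a plain C
array; the local size is hypothetical and error checking is omitted.)

#include <petscvec.h>

/* Sketch: store the solver data in a Vec, but fill or modify it
   through an ordinary C array obtained with VecGetArray().          */
int main(int argc, char **argv)
{
    Vec          u;
    PetscScalar *a;
    PetscInt     i, nlocal = 100;          /* hypothetical local size */

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreateMPI(PETSC_COMM_WORLD, nlocal, PETSC_DECIDE, &u);

    VecGetArray(u, &a);                    /* a[] is plain C storage  */
    for (i = 0; i < nlocal; i++) a[i] = (PetscScalar)i;
    VecRestoreArray(u, &a);

    /* u can now be handed to VecScatter, a KSP, and so on. */
    VecDestroy(&u);
    PetscFinalize();
    return 0;
}

(As far as I understand, VecGetArray gives direct access to the Vec's
own storage, so no copy is made.)
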
> On Sep 6, 2012 5:48 PM, "Zhenglun (Alan) Wei" <zhenglun.wei at gmail.com
> <mailto:zhenglun.wei at gmail.com>> wrote:
>
> Dear Dr. Brown,
> I'm not quite familiar with VecScatter. I just read its
> explanation; it seems to require that my data be stored as
> vectors (is that the Vec in PETSc?). However, my data are stored
> as plain arrays in a C program.
> Is this a problem in MPI, or is it more likely a problem in my code?
>
> thanks,
> Alan
> On 9/6/2012 5:44 PM, Jed Brown wrote:
>>
>> Are you familiar with VecScatter?
>>
>> On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei"
>> <zhenglun.wei at gmail.com <mailto:zhenglun.wei at gmail.com>> wrote:
>>
>> Dear All,
>> I hope you're having a nice day.
>> I ran into a memory problem with MPI data communication. I
>> guess this is a good place to ask, since you are experts and
>> may have experienced the same problem before.
>> I use MPI derived data types (MPI_Type_contiguous,
>> MPI_Type_vector and MPI_Type_indexed) to communicate data in a
>> simulation of a 3D problem. The communication itself is
>> correct, as I checked every value that is sent and received.
>> The problem is that memory usage keeps increasing during the
>> communication. I therefore tested each of the three types:
>> MPI_Type_contiguous shows no problem, while MPI_Type_vector and
>> MPI_Type_indexed both show the memory accumulation. I tried
>> MPI_Type_free, but it does not help. Has anyone experienced
>> this problem before?
>> Could this be related to the non-blocking MPI communication
>> (MPI_Isend and MPI_Irecv)? I have to use non-blocking
>> communication because blocking communication is extremely slow
>> when a lot of data is involved.
>> Is there any alternative in PETSc that does the same kind of
>> work as the MPI derived types?
>>
>> thanks,
>> Alan
>>
>
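
P.S. Regarding my earlier question about a PETSc alternative to the
MPI derived types: VecScatter, as suggested above, seems to cover the
same ground. Below is a minimal sketch with made-up sizes, where a
strided index set stands in for what MPI_Type_vector describes; error
checking is omitted.

#include <petscvec.h>

/* Sketch: gather a strided set of entries of a parallel Vec into a
   local Vec with VecScatter, instead of MPI_Type_vector plus
   MPI_Isend/MPI_Irecv. All sizes are hypothetical.                  */
int main(int argc, char **argv)
{
    Vec        global, local;
    IS         from, to;
    VecScatter scatter;
    PetscInt   n = 10, stride = 8;         /* hypothetical sizes */

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreateMPI(PETSC_COMM_WORLD, 80, PETSC_DECIDE, &global);
    VecCreateSeq(PETSC_COMM_SELF, n, &local);

    /* every stride-th global entry ...                              */
    ISCreateStride(PETSC_COMM_SELF, n, 0, stride, &from);
    /* ... placed contiguously in the local Vec                      */
    ISCreateStride(PETSC_COMM_SELF, n, 0, 1, &to);

    VecScatterCreate(global, from, local, to, &scatter);
    VecScatterBegin(scatter, global, local, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(scatter, global, local, INSERT_VALUES, SCATTER_FORWARD);

    VecScatterDestroy(&scatter);
    ISDestroy(&from);  ISDestroy(&to);
    VecDestroy(&global);  VecDestroy(&local);
    PetscFinalize();
    return 0;
}

The scatter is created once and reused every time step, and PETSc
handles posting and completing the messages internally.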
-------------- next part --------------
A non-text attachment was scrubbed...
Name: V1.13_CommTEST.zip
Type: application/x-zip-compressed
Size: 6930 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20120907/29eaacda/attachment.bin>