[petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Receive

Zhenglun (Alan) Wei zhenglun.wei at gmail.com
Thu Sep 6 17:53:25 CDT 2012


Dear Dr. Smith,
      I am using MPICH. I will try OpenMPI to see whether the same 
problem occurs. Thank you so much for the advice.

cheers,
Alan
On 9/6/2012 5:51 PM, Barry Smith wrote:
>     First I would try another MPI implementation. Do you get the exact same problem with both MPICH and OpenMPI? If so, it is likely an issue with your code; if only one of them has the problem, it is an MPI implementation issue.
>
>     Barry
>
>
> On Sep 6, 2012, at 5:38 PM, "Zhenglun (Alan) Wei" <zhenglun.wei at gmail.com> wrote:
>
>> Dear All,
>>      I hope you're having a nice day.
>>      I ran into a memory problem with MPI data communication. I guess this is a good place to ask, since you are experts and may have experienced the same problem before.
>>      I used the MPI derived data types (MPI_Type_contiguous, MPI_Type_vector and MPI_Type_indexed) to communicate data in a simulation of a 3D problem. The communication itself is correct, as I checked every value sent and received. However, memory usage keeps increasing during the communication. I therefore tested each of the three types: MPI_Type_contiguous does not have any problem, while MPI_Type_vector and MPI_Type_indexed show memory accumulation. I tried MPI_Type_free, but it does not help (a sketch of the commit/free pattern is appended after this message). Has anyone experienced this problem before?
>>      Could this be related to the non-blocking MPI communication (MPI_Isend and MPI_Irecv)? I have to use non-blocking communication because blocking communication is extremely slow when a lot of data is involved (a request-completion sketch is also appended after this message).
>>      Is there an alternative in PETSc that does the same kind of work as MPI derived types (see the DMDA sketch after this message)?
>>
>> thanks,
>> Alan
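
A minimal sketch (not from the thread) of the datatype lifecycle that usually avoids this kind of growth: create and commit the MPI_Type_vector once, reuse it for every step, and call MPI_Type_free only after the last communication that uses it. The function and parameter names (exchange_slabs, nblocks, blocklen, stride, left, right) are hypothetical.

    #include <mpi.h>

    /* Sketch: one MPI_Type_vector reused for many steps.  Committing a
     * fresh type every step without a matching MPI_Type_free is a common
     * source of memory accumulation. */
    void exchange_slabs(double *sendbuf, double *recvbuf,
                        int nblocks, int blocklen, int stride,
                        int left, int right, int nsteps, MPI_Comm comm)
    {
        MPI_Datatype slab;

        MPI_Type_vector(nblocks, blocklen, stride, MPI_DOUBLE, &slab);
        MPI_Type_commit(&slab);                 /* commit once            */

        for (int step = 0; step < nsteps; ++step) {
            MPI_Sendrecv(sendbuf, 1, slab, right, 0,
                         recvbuf, 1, slab, left,  0,
                         comm, MPI_STATUS_IGNORE);
        }

        MPI_Type_free(&slab);                   /* free once, at the end  */
    }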
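
On the non-blocking side: each MPI_Isend/MPI_Irecv returns an MPI_Request, and the library keeps internal state for every request until it is completed (MPI_Wait, MPI_Waitall, MPI_Test) or released with MPI_Request_free; requests that are never completed accumulate. A minimal sketch, with a hypothetical halo_exchange and made-up neighbor ranks left/right:

    #include <mpi.h>

    /* Sketch: post the receives and sends, then complete every request
     * with MPI_Waitall so MPI can release its internal bookkeeping. */
    void halo_exchange(double *sendbuf, double *recvbuf, int count,
                       int left, int right, MPI_Comm comm)
    {
        MPI_Request req[4];

        MPI_Irecv(recvbuf,         count, MPI_DOUBLE, left,  0, comm, &req[0]);
        MPI_Irecv(recvbuf + count, count, MPI_DOUBLE, right, 1, comm, &req[1]);
        MPI_Isend(sendbuf,         count, MPI_DOUBLE, right, 0, comm, &req[2]);
        MPI_Isend(sendbuf + count, count, MPI_DOUBLE, left,  1, comm, &req[3]);

        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    }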
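
As for a PETSc alternative: for a structured 3D grid, PETSc's DMDA object manages the ghost/halo exchange internally (DMGlobalToLocalBegin/DMGlobalToLocalEnd, built on VecScatter), so no hand-written derived types or Isend/Irecv calls are needed. A rough sketch against a recent PETSc; the 64^3 grid, single DOF, and stencil width 1 are arbitrary, older releases spell some names differently (e.g. DMDA_BOUNDARY_NONE) and do not require DMSetUp, and error checking is omitted for brevity.

    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
        DM  da;
        Vec global, local;

        PetscInitialize(&argc, &argv, NULL, NULL);

        /* 3D structured grid, 1 degree of freedom per point, one ghost layer */
        DMDACreate3d(PETSC_COMM_WORLD,
                     DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                     DMDA_STENCIL_STAR,
                     64, 64, 64,
                     PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                     1, 1, NULL, NULL, NULL, &da);
        DMSetUp(da);

        DMCreateGlobalVector(da, &global);
        DMCreateLocalVector(da, &local);

        /* Fill the ghost points of the local vector from neighboring ranks;
         * PETSc handles the message passing (and any derived types) itself. */
        DMGlobalToLocalBegin(da, global, INSERT_VALUES, local);
        DMGlobalToLocalEnd(da, global, INSERT_VALUES, local);

        VecDestroy(&local);
        VecDestroy(&global);
        DMDestroy(&da);
        PetscFinalize();
        return 0;
    }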


