[petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Receive

Zhenglun (Alan) Wei zhenglun.wei at gmail.com
Thu Sep 6 17:38:09 CDT 2012


Dear All,
      I hope you're having a nice day.
      I ran into a memory problem with MPI data communication. I guess
this is a good place to ask, since you are experts and may have
experienced the same problem before.
      I used the MPI derived data types (MPI_Type_contiguous,
MPI_Type_vector, and MPI_Type_indexed) to communicate data in a
simulation of a 3D problem. The communication itself is correct, as I
checked every single value that was sent and received. However, the
memory usage keeps increasing while the communication runs. I therefore
tested each of the three types separately: MPI_Type_contiguous does not
show any problem, while MPI_Type_vector and MPI_Type_indexed both show
memory accumulation. I tried MPI_Type_free, but it does not help. Has
anyone experienced this problem before?
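      To make the question concrete, here is a stripped-down sketch of
the pattern I am asking about (the function name, buffers, and
parameters below are placeholders, not taken from my actual code): the
derived type is created and committed once, reused for every exchange,
and freed only after all communication that uses it has completed.

#include <mpi.h>

/* Create and commit the strided (vector) type once, reuse it for every
 * exchange, and free it only after all communication has completed.
 * (Re)creating a type inside the communication loop without a matching
 * MPI_Type_free would be one way for memory to grow steadily. */
void exchange_slices(double *sendbuf, double *recvbuf,
                     int count, int blocklen, int stride,
                     int partner, int nsteps, MPI_Comm comm)
{
    MPI_Datatype slice;

    MPI_Type_vector(count, blocklen, stride, MPI_DOUBLE, &slice);
    MPI_Type_commit(&slice);

    for (int step = 0; step < nsteps; ++step) {
        MPI_Request reqs[2];
        MPI_Irecv(recvbuf, 1, slice, partner, 0, comm, &reqs[0]);
        MPI_Isend(sendbuf, 1, slice, partner, 0, comm, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Type_free(&slice);  /* release the committed type exactly once */
}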
      Could this be related to the non-blocking MPI communication
(MPI_Isend and MPI_Irecv)? I have to use non-blocking communication,
since blocking communication is extremely slow when a lot of data is
involved in the exchange.
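      For reference, this is roughly how I understand the non-blocking
exchange is supposed to look for a 3D decomposition with up to six face
neighbours (again a simplified sketch with placeholder names, not my
actual code); every MPI_Isend/MPI_Irecv request is completed with
MPI_Waitall, since requests that are never completed would also
accumulate memory:

#include <mpi.h>

/* Directions are paired (+x/-x = 0/1, +y/-y = 2/3, +z/-z = 4/5), so a
 * message sent in direction d is received by the neighbour as its
 * direction d^1; using that as the tag keeps the matching unambiguous. */
void exchange_with_neighbours(double *sendbufs[6], double *recvbufs[6],
                              const int neighbour[6],
                              MPI_Datatype face[6], MPI_Comm comm)
{
    MPI_Request reqs[12];
    int n = 0;

    for (int d = 0; d < 6; ++d) {
        if (neighbour[d] == MPI_PROC_NULL)
            continue;
        MPI_Irecv(recvbufs[d], 1, face[d], neighbour[d], d ^ 1, comm,
                  &reqs[n++]);
        MPI_Isend(sendbufs[d], 1, face[d], neighbour[d], d, comm,
                  &reqs[n++]);
    }

    /* Complete every outstanding request before the buffers are reused;
     * requests that are never waited on (or freed) are leaked. */
    MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);
}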
      Is there any alternative in PETSc that could do work similar to
the MPI derived types?
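      In case it changes the answer: what I would be replacing is
essentially the ghost/halo exchange of a structured 3D grid. My
understanding (please correct me) is that a DMDA can manage the
decomposition and perform this exchange internally, along the lines of
the sketch below; the exact function names and argument lists depend on
the PETSc version, and error checking is omitted for brevity.

#include <petscdmda.h>

int main(int argc, char **argv)
{
    DM  da;
    Vec global, local;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* 64^3 grid, 1 dof per node, stencil width 1 (one layer of ghost
     * points); PETSc chooses the process decomposition (PETSC_DECIDE). */
    DMDACreate3d(PETSC_COMM_WORLD,
                 DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_STAR,
                 64, 64, 64,
                 PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                 1, 1, NULL, NULL, NULL, &da);
    DMSetFromOptions(da);
    DMSetUp(da);

    DMCreateGlobalVector(da, &global);
    DMCreateLocalVector(da, &local);

    /* Fill the ghost points of the local vector from the global vector;
     * this replaces the hand-coded derived-type Isend/Irecv exchange. */
    DMGlobalToLocalBegin(da, global, INSERT_VALUES, local);
    DMGlobalToLocalEnd(da, global, INSERT_VALUES, local);

    VecDestroy(&local);
    VecDestroy(&global);
    DMDestroy(&da);
    PetscFinalize();
    return 0;
}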

thanks,
Alan

