[petsc-users] MPI Derived Data Types and Non-Blocking MPI Send/Receive

Jed Brown jedbrown at mcs.anl.gov
Thu Sep 6 17:44:11 CDT 2012


Are you familiar with VecScatter?
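
For readers unfamiliar with it, here is a minimal sketch of what a VecScatter-based exchange might look like. The global size, local size, and index sets below are placeholders, not taken from Alan's problem, and error checking (CHKERRQ) is omitted for brevity. The point is that the communication pattern is created once and then reused every time step, with PETSc managing the underlying MPI machinery.

#include <petscvec.h>

/* Minimal sketch (placeholder sizes and indices): gather selected entries of
   a parallel Vec into a sequential Vec with VecScatter.  The scatter context
   is created once and reused; PETSc handles the MPI communication details. */
int main(int argc, char **argv)
{
  Vec        global, local;
  IS         from, to;
  VecScatter scatter;
  PetscInt   nlocal = 8, idx_from[8], idx_to[8], i;   /* assumed sizes */

  PetscInitialize(&argc, &argv, NULL, NULL);

  VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 64, &global);
  VecCreateSeq(PETSC_COMM_SELF, nlocal, &local);

  /* Placeholder index map: which global entries go to which local slots. */
  for (i = 0; i < nlocal; i++) { idx_from[i] = i; idx_to[i] = i; }
  ISCreateGeneral(PETSC_COMM_SELF, nlocal, idx_from, PETSC_COPY_VALUES, &from);
  ISCreateGeneral(PETSC_COMM_SELF, nlocal, idx_to,   PETSC_COPY_VALUES, &to);

  VecScatterCreate(global, from, local, to, &scatter);

  /* Reuse the same scatter every time step; nothing new is allocated here. */
  VecScatterBegin(scatter, global, local, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scatter, global, local, INSERT_VALUES, SCATTER_FORWARD);

  VecScatterDestroy(&scatter);
  ISDestroy(&from);  ISDestroy(&to);
  VecDestroy(&global);  VecDestroy(&local);
  PetscFinalize();
  return 0;
}
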
On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" <zhenglun.wei at gmail.com>
wrote:

> Dear All,
>      I hope you're having a nice day.
>      I ran into a memory problem with MPI data communication. I guess this
> is a good place to ask, since you are experts and may have experienced the
> same problem before.
>      I used MPI derived data types (MPI_Type_contiguous, MPI_Type_vector,
> and MPI_Type_indexed) to communicate data for a simulation of a 3D problem.
> The communication itself is correct, as I checked every value sent and
> received. However, the memory usage keeps growing during communication, so
> I tested each of the three types: MPI_Type_contiguous shows no problem,
> while MPI_Type_vector and MPI_Type_indexed accumulate memory. I tried
> MPI_Type_free, but it does not help. Has anyone experienced this problem
> before? (See the sketch of the usual create/commit/reuse/free pattern after
> the quoted message.)
>      Could this be related to the non-blocking MPI communication (MPI_Isend
> and MPI_Irecv)? I have to use non-blocking communication, since blocking
> communication is extremely slow when a lot of data is involved.
>      Is there an alternative in PETSc that does the same kind of work as
> the MPI derived types?
>
> thanks,
> Alan
>
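
As an aside on the memory growth Alan describes: one common source of steadily increasing memory is committing new derived datatypes, or leaving non-blocking requests uncompleted, inside the exchange loop. The sketch below uses assumed slab dimensions and periodic neighbor ranks and is offered as a reference for the typical pattern, not as a diagnosis of Alan's code: create and commit the MPI_Type_vector once, reuse it with MPI_Isend/MPI_Irecv plus MPI_Waitall every step, and call MPI_Type_free only after the last use.

#include <mpi.h>
#include <stdlib.h>

/* Minimal sketch (assumed buffer sizes and neighbor ranks): one derived type
   created up front, reused each step, freed once at the end. */
int main(int argc, char **argv)
{
  int          rank, size, left, right;
  const int    nx = 16, ny = 16;          /* assumed local slab dimensions */
  double      *field;
  MPI_Datatype face;                      /* one y-column of the slab */
  MPI_Request  reqs[4];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  left  = (rank - 1 + size) % size;       /* placeholder periodic neighbors */
  right = (rank + 1) % size;

  field = (double *)calloc((size_t)nx * ny, sizeof(double));

  /* Create and commit the derived type ONCE, not inside the time loop. */
  MPI_Type_vector(ny, 1, nx, MPI_DOUBLE, &face);
  MPI_Type_commit(&face);

  for (int step = 0; step < 100; step++) {
    MPI_Irecv(&field[0],      1, face, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&field[nx - 1], 1, face, right, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&field[1],      1, face, left,  1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&field[nx - 2], 1, face, right, 0, MPI_COMM_WORLD, &reqs[3]);
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);  /* complete requests each step */
  }

  MPI_Type_free(&face);                   /* free once, after the last use */
  free(field);
  MPI_Finalize();
  return 0;
}
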