[petsc-users] MPI Derived Data Type and Non-Blocking MPI Send/Receive

Jed Brown jedbrown at mcs.anl.gov
Fri Sep 7 21:20:26 CDT 2012


   if (!localBC.tBC) {
      MPI_Isend(&_TestV[0][_Index.tcEnd-1][0], 1, columntype, NbrRank.t, SendTag.T1st, PETSC_COMM_WORLD, &request);
      MPI_Isend(&_TestV[0][_Index.tcEnd][0],   1, columntype, NbrRank.t, SendTag.T2nd, PETSC_COMM_WORLD, &request);
    }

    if (!localBC.bBC) {
      MPI_Isend(&_TestV[0][_Index.bcStr+1][0], 1, columntype, NbrRank.b, SendTag.B1st, PETSC_COMM_WORLD, &request);
      MPI_Isend(&_TestV[0][_Index.bcStr][0],   1, columntype, NbrRank.b, SendTag.B2nd, PETSC_COMM_WORLD, &request);
    }

    MPI_Barrier(PETSC_COMM_WORLD);
    printf("Rank = %d finished sending!!!\n", rank);

    if (!localBC.tBC) {
      MPI_Irecv(&_TestV[0][_Index.tbStr+1][0], 1, columntype, NbrRank.t, RecvTag.T2nd, PETSC_COMM_WORLD, &request);
      MPI_Irecv(&_TestV[0][_Index.tbStr][0],   1, columntype, NbrRank.t, RecvTag.T1st, PETSC_COMM_WORLD, &request);
    }

    if (!localBC.bBC) {
      MPI_Irecv(&_TestV[0][_Index.bbEnd-1][0], 1, columntype, NbrRank.b, RecvTag.B2nd, PETSC_COMM_WORLD, &request);
      MPI_Irecv(&_TestV[0][_Index.bbEnd][0],   1, columntype, NbrRank.b, RecvTag.B1st, PETSC_COMM_WORLD, &request);
    }

    MPI_Wait(&request, &status);


Every one of those calls overwrites the same `request', so you are
creating far more requests than you are waiting on; only the last one is
ever completed, and requests that are never completed keep their
resources, which is very likely the memory growth you are seeing. You
need to keep track of *every* request and eventually wait on all of them.

It is generally better for performance to post the receives first, then
post the sends, then MPI_Waitall() on all the requests.
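
In outline, that pattern looks like this (a minimal sketch reusing the
names from your snippet above; error checking omitted):

    MPI_Request reqs[8];   /* at most 8 messages in flight here */
    int         nreq = 0;

    /* post all receives first ... */
    if (!localBC.tBC) {
      MPI_Irecv(&_TestV[0][_Index.tbStr+1][0], 1, columntype, NbrRank.t, RecvTag.T2nd, PETSC_COMM_WORLD, &reqs[nreq++]);
      MPI_Irecv(&_TestV[0][_Index.tbStr][0],   1, columntype, NbrRank.t, RecvTag.T1st, PETSC_COMM_WORLD, &reqs[nreq++]);
    }
    if (!localBC.bBC) {
      MPI_Irecv(&_TestV[0][_Index.bbEnd-1][0], 1, columntype, NbrRank.b, RecvTag.B2nd, PETSC_COMM_WORLD, &reqs[nreq++]);
      MPI_Irecv(&_TestV[0][_Index.bbEnd][0],   1, columntype, NbrRank.b, RecvTag.B1st, PETSC_COMM_WORLD, &reqs[nreq++]);
    }

    /* ... then all sends, each with its own request slot ... */
    if (!localBC.tBC) {
      MPI_Isend(&_TestV[0][_Index.tcEnd-1][0], 1, columntype, NbrRank.t, SendTag.T1st, PETSC_COMM_WORLD, &reqs[nreq++]);
      MPI_Isend(&_TestV[0][_Index.tcEnd][0],   1, columntype, NbrRank.t, SendTag.T2nd, PETSC_COMM_WORLD, &reqs[nreq++]);
    }
    if (!localBC.bBC) {
      MPI_Isend(&_TestV[0][_Index.bcStr+1][0], 1, columntype, NbrRank.b, SendTag.B1st, PETSC_COMM_WORLD, &reqs[nreq++]);
      MPI_Isend(&_TestV[0][_Index.bcStr][0],   1, columntype, NbrRank.b, SendTag.B2nd, PETSC_COMM_WORLD, &reqs[nreq++]);
    }

    /* ... then complete every request; this is what releases the
       resources that were accumulating. The MPI_Barrier and the
       single MPI_Wait are no longer needed. */
    MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);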

On Fri, Sep 7, 2012 at 4:50 PM, Zhenglun (Alan) Wei
<zhenglun.wei at gmail.com> wrote:

>  Dear folks,
>      I did more tests, since I wanted to pin down where I'm going wrong.
>      As Dr. Smith suggested, I tested my code with both OpenMPI and
> MPICH. Both show the memory accumulation problem, so I suppose there is
> a bug in my code. I went into the code and changed the non-blocking MPI
> communication to blocking communication; the memory accumulation problem
> went away by itself. However, I have to change it back, since blocking
> communication is far too slow for the amount of data I need to exchange.
> Now I'm reading up on non-blocking MPI communication.
>      I have cut out the unrelated parts of my code and attached the
> communication part here. Could anyone briefly check whether there is any
> obvious mistake in the program? After unzipping the file, './AlanRun'
> will execute the program.
>
> I really appreciate your help :)
> Alan
>
>
>
> On 9/6/2012 5:56 PM, Jed Brown wrote:
>
> Numeric data that the solver sees should be stored in Vecs. You can put
> other scalars in Vecs if you like.
> On Sep 6, 2012 5:48 PM, "Zhenglun (Alan) Wei" <zhenglun.wei at gmail.com>
> wrote:
>
>>  Dear Dr. Brown,
>>      I'm not quite familiar with VecScatter. I just read its
>> description; it seems to require that my data be stored as vectors (are
>> these PETSc Vecs?). However, my data are stored as plain C arrays.
>>      Could this be a problem in MPI, or is it more likely a problem in
>> my code?
>>
>> thanks,
>> Alan
>> On 9/6/2012 5:44 PM, Jed Brown wrote:
>>
>> Are you familiar with VecScatter?
>> On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" <zhenglun.wei at gmail.com>
>> wrote:
>>
>>> Dear All,
>>>      I hope you're having a nice day.
>>>      I have run into a memory problem with MPI data communication. I
>>> guess this is a good place to ask, since you are experts and may have
>>> experienced the same problem before.
>>>      I use MPI derived datatypes (MPI_Type_contiguous, MPI_Type_vector,
>>> and MPI_Type_indexed) to communicate data in a simulation of a 3D
>>> problem. The communication itself is correct, as I have checked every
>>> value sent and received. The problem is that memory keeps growing
>>> during communication. I therefore tested each of the three types:
>>> MPI_Type_contiguous shows no problem, while MPI_Type_vector and
>>> MPI_Type_indexed both show memory accumulation. I tried MPI_Type_free,
>>> but it does not help (a sketch of the usual create/commit/free
>>> lifecycle appears at the end of this message). Has anyone experienced
>>> this problem before?
>>>      Could this be related to the non-blocking MPI communication
>>> (MPI_Isend and MPI_Irecv)? I have to use non-blocking communication,
>>> since blocking communication is extremely slow when a lot of data is
>>> involved.
>>>      Is there any alternative in PETSc that can do work similar to the
>>> MPI derived types? (A VecScatter sketch also appears at the end of
>>> this message.)
>>>
>>> thanks,
>>> Alan
>>>
>>
>>
>
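
For reference, a minimal sketch of the usual lifecycle of a derived
datatype such as the columntype above (the count, block length, and
stride arguments are placeholders; adjust them to the actual array
layout). Note that MPI_Type_free only releases the datatype itself; it
cannot release request objects, so it will not cure a leak caused by
requests that are never completed:

    MPI_Datatype columntype;
    /* placeholder arguments: `ny' blocks of 1 double, `stride' doubles apart */
    MPI_Type_vector(ny, 1, stride, MPI_DOUBLE, &columntype);
    MPI_Type_commit(&columntype);

    /* ... reuse columntype for every exchange, every time step;
       do not create and commit a fresh type each iteration ... */

    MPI_Type_free(&columntype);   /* once, after the last communication */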
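
And on the VecScatter suggestion: a rough sketch of how a scatter
replaces hand-written MPI_Isend/MPI_Irecv once the data lives in Vecs.
The vectors `gvec'/`lvec' and index sets `isFrom'/`isTo' are
placeholders; the index sets describe which entries move, i.e. the same
ghost layout the derived datatypes encode now:

    VecScatter ctx;
    VecScatterCreate(gvec, isFrom, lvec, isTo, &ctx);   /* build once */
    VecScatterBegin(ctx, gvec, lvec, INSERT_VALUES, SCATTER_FORWARD);
    /* ... local computation can overlap the communication here ... */
    VecScatterEnd(ctx, gvec, lvec, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterDestroy(&ctx);   /* once, at the end */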