[mpich-discuss] [mpich2-dev] In direct memory block for handle type REQUEST, 8 handles are still allocated
Hisham Adel
hosham2004 at yahoo.com
Tue Sep 6 06:47:19 CDT 2011
How can I free these requests?
For the datatype I created, I call MPI_Type_free(&MPIText_DataType), and I call MPI_Finalize() at the end of the program...
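(For reference, MPI_Type_free only releases the datatype; request handles
come from nonblocking operations and are released by completing them. A
minimal sketch with illustrative names, assuming the requests come from
MPI_Isend -- not code from the actual program:)

    #include <mpi.h>

    void flush_pending_sends(int peer)
    {
        MPI_Request reqs[2];
        int a = 1, b = 2;

        MPI_Isend(&a, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&b, 1, MPI_INT, peer, 1, MPI_COMM_WORLD, &reqs[1]);

        /* MPI_Waitall completes both operations and releases the
           request objects, setting each handle to MPI_REQUEST_NULL. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

(MPI_Test in a polling loop also works; MPI_Request_free marks a request
to be released once the operation completes.)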
________________________________
From: Pavan Balaji <balaji at mcs.anl.gov>
To: Hisham Adel <hosham2004 at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Tuesday, September 6, 2011 1:40 PM
Subject: Re: [mpich2-dev] In direct memory block for handle type REQUEST, 8 handles are still allocated
[Dropping mpich2-dev from the cc list, as this is a user question, not a
developer question].
It sounds like your program is not freeing some MPI request that it created.
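A common pattern that produces this warning is a nonblocking call whose
request is never completed. As a hedged illustration (hypothetical code,
not taken from your program):

    #include <mpi.h>

    /* Leaks a request: the MPI_Isend handle is abandoned. */
    void leaky_send(int *buf, int n, int dest)
    {
        MPI_Request req;
        MPI_Isend(buf, n, MPI_INT, dest, 0, MPI_COMM_WORLD, &req);
        /* Returning without MPI_Wait/MPI_Test leaves the request
           allocated; MPI_Finalize later reports it as leaked. */
    }

    /* Fixed: completing the request also frees the handle. */
    void completed_send(int *buf, int n, int dest)
    {
        MPI_Request req;
        MPI_Isend(buf, n, MPI_INT, dest, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }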
-- Pavan
On 09/06/2011 06:32 AM, Hisham Adel wrote:
> Hi,
>
> On Ubuntu 11.04 with gcc 4.5.2, I use mpich2-1.4.1p1 with these
> configuration options:
> ~/mpich2-1.4/configure --enable-error-messages=all --enable-g=all
> --enable-shared --enable-sharedlibs=gcc --without-mpe
> --with-pm=hydra:mpd --disable-f77 --disable-fc
>
> After the program terminates, I receive a memory-related message when I
> use MPI_Send.
>
> For example, the message below appears when I use three MPI_Send calls.
> The first and second MPI_Send are used to send two separate integers;
> the third sends a data structure of my own. At the receiving node, the
> data are printed out correctly.
>
> How can I fix it?
> I would be grateful if you could help me.
>
> Regards,
> Hisham
>
> In direct memory block for handle type REQUEST, 8 handles are still allocated
> In indirect memory block 0 for handle type REQUEST, 16 handles are still allocated
> [3] 1624 at [0x08930c98], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x08930bf0], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x08930b48], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 1624 at [0x08930450], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x089303a8], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x08930300], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 1624 at [0x0892fc08], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x0892fb60], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x0892fab8], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 1624 at [0x0892f3c0], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x0892f318], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x0892f270], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 1624 at [0x0892eb78], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x0892ead0], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x0892ea28], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 1624 at [0x0892e330], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x0890f948], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x0890f8a0], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 1624 at [0x0890f1a8], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x0890f100], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x0890f058], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 1624 at [0x0890e960], 1.4.1p1/src/mpid/ch3/src/ch3u_handle_recv_pkt.c[248]
> [3] 8 at [0x0890e8b8], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
> [3] 8 at [0x0890e810], am/mpich2-1.4.1p1/src/mpid/ch3/src/ch3u_eager.c[439]
>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji