[mpich-discuss] [mpich2-maint] #428: MPI_Win_fence memory consumption

Jayesh Krishna jayesh at mcs.anl.gov
Wed Mar 4 17:29:07 CST 2009


We are looking at the code to see if we missed anything... will keep you
posted...
 
Regards,
jayesh

  _____  

From: mpich-discuss-bounces at mcs.anl.gov
[mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Jayesh Krishna
Sent: Wednesday, March 04, 2009 3:16 PM
To: 'Dorian Krause'
Cc: mpich2-maint at mcs.anl.gov; mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] [mpich2-maint] #428: MPI_Win_fence memory consumption



Hi,

>> But the memory layout of an array of double3 and...
        Any *tricks* like the one you tried would make your code
non-portable.

>> Is MPI_Type_create_struct able to infer from the input data whether
the struct is a contiguous block ...
      It looks like MPICH2 does. However, I don't think the standard says
anything about this (don't make any assumptions that are not in the
standard)...
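
      For reference, a rough sketch of describing such a struct portably
with MPI_Type_create_struct (the double3 definition and field names below
are just assumptions for illustration, not taken from your test program):

#include <mpi.h>
#include <stddef.h>

typedef struct { double x, y, z; } double3;   /* assumed layout */

/* Build an MPI datatype that matches double3, including any padding the
 * compiler might add, instead of assuming it equals 3 x MPI_DOUBLE. */
static MPI_Datatype make_double3_type(void)
{
    MPI_Datatype t, t_resized;
    int          blocklens[3] = { 1, 1, 1 };
    MPI_Aint     displs[3]    = { offsetof(double3, x),
                                  offsetof(double3, y),
                                  offsetof(double3, z) };
    MPI_Datatype types[3]     = { MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE };

    MPI_Type_create_struct(3, blocklens, displs, types, &t);
    /* Force the extent to sizeof(double3) so arrays of double3 work too. */
    MPI_Type_create_resized(t, 0, (MPI_Aint)sizeof(double3), &t_resized);
    MPI_Type_commit(&t_resized);
    MPI_Type_free(&t);
    return t_resized;
}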

>> ...I'm just sending my array as a MPI_DOUBLE array...
      I think this is the *right* approach in your case.

Regards,
Jayesh

-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov
[mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Dorian Krause
Sent: Wednesday, March 04, 2009 2:52 PM
To: mpich-discuss at mcs.anl.gov
Cc: mpich2-maint at mcs.anl.gov
Subject: Re: [mpich-discuss] [mpich2-maint] #428: MPI_Win_fence memory consumption

Hi

(sorry for sending the message twice ...)

Jayesh Krishna wrote:
>  Hi,
>   A contiguous MPI derived type consisting of 3 MPI_DOUBLEs is not
> equivalent to a C structure with 3 doubles. Try using an array of 3
> doubles (double[3])

But the memory layout of an array of double3 and an array of double[3] is
the same, since all data is aligned to an 8-byte boundary and no padding
is necessary (I checked it).
In the end, MPI is passed a void*, so the actual data layout in the buffer
is irrelevant: it just grabs what it thinks is the correct data (and if
that isn't correct, e.g. for padding reasons, it is the user's fault).
If this were the problem, I would expect the problems to be on the origin
side, not on the target side?!
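
For illustration, a minimal sketch of the kind of layout check I mean
(assuming double3 is a plain struct of three doubles):

#include <assert.h>
#include <stddef.h>

typedef struct { double x, y, z; } double3;   /* assumed definition */

/* If these hold, an array of double3 and a double[3*n] array have
 * byte-identical layout on this platform; otherwise padding differs and
 * the "3 x MPI_DOUBLE" shortcut is not safe. */
static void check_double3_layout(void)
{
    assert(sizeof(double3) == 3 * sizeof(double));
    assert(offsetof(double3, y) == sizeof(double));
    assert(offsetof(double3, z) == 2 * sizeof(double));
}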

> or using an equivalent MPI datatype (eg:
> MPI_Type_create_struct()).
>  

With MPI_Type_create_struct I don't observe the mentioned problems.

Is MPI_Type_create_struct able to infer from the input data whether the
struct is a contiguous block of data? I read that strided data still
causes a significant drop in performance, so this wouldn't be a good
alternative if the overhead is too large.

At the moment I'm just sending my array as an MPI_DOUBLE array of 3x the
size. This works for the moment ...
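
Roughly, the workaround looks like this (a sketch only; the buffer names,
the target displacement and the MPI_SUM op are placeholders, not taken
from the attached program):

#include <mpi.h>

/* n triples stored contiguously as 3*n doubles on both origin and target. */
void accumulate_triples(double *local, MPI_Win win, int target_rank, int n)
{
    MPI_Win_fence(0, win);
    /* Treat each triple as three plain MPI_DOUBLEs instead of using a
     * derived datatype. */
    MPI_Accumulate(local, 3 * n, MPI_DOUBLE,
                   target_rank, 0 /* target displacement */,
                   3 * n, MPI_DOUBLE, MPI_SUM, win);
    MPI_Win_fence(0, win);
}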

Thanks + Regards,
Dorian

>   Let us know if it works for you.
>
> Regards,
> Jayesh
>
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Rajeev Thakur
> Sent: Friday, February 27, 2009 9:07 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] MPI_Win_fence memory consumption
>
> OK, thanks. We will look into it.
>
> Rajeev
> 
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Dorian Krause
>> Sent: Friday, February 27, 2009 7:44 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: Re: [mpich-discuss] MPI_Win_fence memory consumption
>>
>> Rajeev Thakur wrote:
>>   
>>> Does that happen only with Nemesis or even with ch3:sock?
>>>        
>> The behaviour is the same with the configure flag
>> --with-device=ch3:sock.
>>
>> Dorian
>>
>>   
>>> Rajeev
>>>
>>>       
>>>> -----Original Message-----
>>>> From: mpich-discuss-bounces at mcs.anl.gov
>>>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Dorian Krause
>>>> Sent: Friday, February 27, 2009 7:16 AM
>>>> To: mpich-discuss at mcs.anl.gov
>>>> Subject: [mpich-discuss] MPI_Win_fence memory consumption
>>>>
>>>> Hi List,
>>>>
>>>> the attached test program uses MPI_Accumulate/MPI_Win_fence for
>>>> one-sided communication with a derived datatype.
>>>> The program runs fine with mpich2-1.1a2 except for my debugging
>>>> version of MPICH2 compiled with
>>>>
>>>> ./configure --with-device=ch3:nemesis --enable-g=dbg,mem,meminit
>>>>
>>>> In this case the MPI_Win_fence on the target side consumes about 90%
>>>> of main memory (e.g. > 3 GB). As the behaviour is completely
>>>> different for predefined datatypes, I suspect that the memory
>>>> consumption is related to the construction of the derived datatype
>>>> on the target side.
>>>>
>>>> Is there a workaround for this?
>>>>
>>>> Thanks + Best regards,
>>>> Dorian


