[mpich-discuss] [mpich2-maint] #428: MPI_Win_fence memory consumption

Dorian Krause ddkrause at uni-bonn.de
Wed Mar 4 14:52:05 CST 2009


Hi

(sorry for sending the message twice ...)

Jayesh Krishna wrote:
>  Hi,
>   A contiguous MPI derived type consisting of 3 MPI_DOUBLEs is not
> equivalent to a C structure with 3 doubles. Try using an array of 3
> doubles (double[3]) 

But the memory layout of an array of double3 and an array of double[3]
is the same, since all data is aligned to an 8-byte boundary and no
padding is necessary (I checked it).
But in the end, MPI gets a void* passed, so the actual data layout
in the buffer is irrelevant: it just grabs what it thinks is the
correct data (and if that isn't correct, e.g. for padding reasons, that
is the user's fault). If this were the problem, I would expect the
trouble to show up on the origin side, not on the target side?!
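
To make this concrete, here is essentially the check I did (a minimal
sketch; double3 is my struct of three doubles):

    #include <stdio.h>
    #include <stddef.h>

    typedef struct { double x, y, z; } double3;

    int main(void)
    {
        /* With 8-byte alignment there is no padding, so both
         * layouts are identical. */
        printf("sizeof(double3)   = %zu\n", sizeof(double3));    /* 24 */
        printf("sizeof(double[3]) = %zu\n", sizeof(double[3]));  /* 24 */
        printf("offsets: y = %zu, z = %zu\n",
               offsetof(double3, y), offsetof(double3, z));      /* 8, 16 */
        return 0;
    }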

> or using an equivalent MPI datatype (eg:
> MPI_Type_create_struct()).
>   

With MPI_Type_create_struct I don't observe the mentioned problems.
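
For reference, here is roughly how I construct the type (a sketch; the
function and variable names here are made up):

    #include <mpi.h>

    typedef struct { double x, y, z; } double3;

    /* Build an MPI datatype equivalent to double3: one block of
     * 3 doubles at displacement 0 (the struct has no padding). */
    static MPI_Datatype make_double3_type(void)
    {
        MPI_Datatype dtype;
        int          blocklens[1] = { 3 };
        MPI_Aint     displs[1]    = { 0 };
        MPI_Datatype types[1]     = { MPI_DOUBLE };

        MPI_Type_create_struct(1, blocklens, displs, types, &dtype);
        MPI_Type_commit(&dtype);
        return dtype;
    }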

Is MPI_Type_create_struct able to infer from the input data whether the
struct is a contiguous block of data? I have read that strided data
still yields a significant drop in performance, so this wouldn't be a
good alternative if the overhead is too large.

At the moment I'm just sending my array as an MPI_DOUBLE array of 3x the
size. This works for the moment ...
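
That is, something like this (n, origin, target_rank and win are as in
my test program; the names here are placeholders, and MPI_SUM is just an
example op):

    /* n double3 elements, reinterpreted as 3*n MPI_DOUBLEs on both
     * the origin and the target side. */
    MPI_Accumulate(origin, 3 * n, MPI_DOUBLE,
                   target_rank, 0, 3 * n, MPI_DOUBLE,
                   MPI_SUM, win);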

Thanks + Regards,
Dorian

>   Let us know if it works for you.
>
> Regards,
> Jayesh
>
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Rajeev Thakur
> Sent: Friday, February 27, 2009 9:07 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] MPI_Win_fence memory consumption
>
> OK, thanks. We will look into it.
>
> Rajeev
>  
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov 
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Dorian Krause
>> Sent: Friday, February 27, 2009 7:44 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: Re: [mpich-discuss] MPI_Win_fence memory consumption
>>
>> Rajeev Thakur wrote:
>>    
>>> Does that happen only with Nemesis or even with ch3:sock?
>>>         
>> The behaviour is the same with the configure flag 
>> --with-device=ch3:sock.
>>
>> Dorian
>>
>>    
>>> Rajeev
>>>
>>>        
>>>> -----Original Message-----
>>>> From: mpich-discuss-bounces at mcs.anl.gov 
>>>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Dorian Krause
>>>> Sent: Friday, February 27, 2009 7:16 AM
>>>> To: mpich-discuss at mcs.anl.gov
>>>> Subject: [mpich-discuss] MPI_Win_fence memory consumption
>>>>
>>>> Hi List,
>>>>
>>>> the attached test program uses MPI_Accumulate/MPI_Win_fence for one
>>>> sided communication with a derived datatype.
>>>> The program runs fine with mpich2-1.1a2 except for my debugging 
>>>> version of MPICH2 compiled with
>>>>
>>>> ./configure --with-device=ch3:nemesis --enable-g=dbg,mem,meminit
>>>>
>>>> In this case the MPI_Win_fence on the target side consumes about 90%
>>>> of main memory (e.g. > 3 GB). As the behaviour is completely
>>>> different for predefined datatypes, I suspect that the memory
>>>> consumption is related to the construction of the derived datatype on
>>>> the target side.
>>>>
>>>> Is there a workaround for this?
>>>>
>>>> Thanks + Best regards,
>>>> Dorian


