<div>Christina,</div><div><br></div><div>Blue Gene/P is a 32-bit platform where we have hit similar problems. To get around this, we increased the size of MPI_Aint in MPICH2 beyond the size of void*, to 64 bits. I suspect that your test case would work on our system; if possible, I would like to see your test code so that I can run it here and make sure we handle it correctly.</div>
<div><br></div><div>If you are interested, we have patches against 1.0.7 and 1.1.0 that you can use (we skipped 1.0.8). If you can build MPICH2 using those patches, you may be able to run your application. On the other hand, they may be too specific to our platform. We have been working with ANL to incorporate our changes into the standard MPICH2 releases, but there isn't a lot of demand for 64-bit MPI-IO on 32-bit machines.</div>
<div><br></div><div><br></div><div>Thanks,</div><div>Joe Ratterman</div><div>IBM Blue Gene/P Messaging</div><div><a href="mailto:jratt@us.ibm.com">jratt@us.ibm.com</a></div><div><br></div><div><br></div><br><div class="gmail_quote">
On Fri, Jul 17, 2009 at 7:12 PM, Christina Patrick <span dir="ltr"><<a href="mailto:christina.subscribes@gmail.com">christina.subscribes@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hi Pavan,<br>
<br>
I ran the command<br>
<br>
$ getconf | grep -i WORD<br>
WORD_BIT=32<br>
<br>
So I guess it is a 32-bit system.<br>
<br>
Thanks and Regards,<br>
<font color="#888888">Christina.<br>
</font><div><div></div><div class="h5"><br>
On Fri, Jul 17, 2009 at 8:06 PM, Pavan Balaji <<a href="mailto:balaji@mcs.anl.gov">balaji@mcs.anl.gov</a>> wrote:<br>
><br>
> Is it a 32-bit system? MPI_Aint is the size of a (void *), so on 32-bit<br>
> systems it's restricted to 2GB.<br>
><br>
> -- Pavan<br>
><br>
> On 07/17/2009 07:04 PM, Christina Patrick wrote:<br>
>><br>
>> Hi Everybody,<br>
>><br>
>> I am trying to create an 8 GB file (a 32768 x 32768 array of<br>
>> doubles, 8 bytes each) using 16 MPI processes. However, every time I<br>
>> try doing that, MPI aborts. The backtrace shows a problem in the<br>
>> ADIOI_Calc_my_off_len() function, which declares a variable:<br>
>> MPI_Aint filetype_extent;<br>
>><br>
>> and the value of filetype_extent is 0 whenever it executes<br>
>> MPI_Type_extent(fd->filetype, &filetype_extent);<br>
>> Hence, when it reaches the statement:<br>
>> 335 n_filetypes = (offset - flat_file->indices[0]) /<br>
>> filetype_extent;<br>
>> I always get SIGFPE. Is there a solution to this problem? Can I create<br>
>> such a big file?<br>
>> I checked the value of the variable while creating a file of up to<br>
>> 2 GB, and it is NOT zero, which makes me conclude that there is an<br>
>> overflow when I am specifying 8 GB.<br>
>><br>
>> Thanks and Regards,<br>
>> Christina.<br>
>><br>
>> PS: I am using the PVFS2 filesystem with mpich2-1.0.8 and pvfs-2.8.0.<br>
><br>
> --<br>
> Pavan Balaji<br>
> <a href="http://www.mcs.anl.gov/~balaji" target="_blank">http://www.mcs.anl.gov/~balaji</a><br>
><br>
</div></div></blockquote></div><br>