[mpich-discuss] extra bytes

Rajeev Thakur thakur at mcs.anl.gov
Thu Aug 9 15:28:13 CDT 2012


You need to reset the file view before the append. Otherwise the write still goes through the subarray filetype, so the appended data is scattered at offsets dictated by that layout instead of landing contiguously at the end of the file -- that is where the extra bytes come from.
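
Something along these lines should work (a minimal sketch only; append_1d, fh, var1/var2, and rank are placeholder names, not the ones in your program):

subroutine append_1d(fh, var1, var2, rank, ierr)
  use mpi
  implicit none
  integer, intent(in)  :: fh, rank
  real,    intent(in)  :: var1(72), var2(72)
  integer, intent(out) :: ierr
  integer(kind=MPI_OFFSET_KIND) :: fsize
  integer :: status(MPI_STATUS_SIZE)

  ! Find the end of the file after the collective 3D write, then
  ! replace the subarray view with a flat byte view starting there.
  ! MPI_File_set_view is collective, so every rank that opened the
  ! file must make this call, even though only one rank writes.
  call MPI_File_get_size(fh, fsize, ierr)
  call MPI_File_set_view(fh, fsize, MPI_BYTE, MPI_BYTE, 'native', &
                         MPI_INFO_NULL, ierr)

  ! With the flat view, the appended records go exactly where the
  ! individual file pointer says, with no filetype holes in between.
  if (rank == 0) then
     call MPI_File_write(fh, var1, size(var1), MPI_REAL, status, ierr)
     call MPI_File_write(fh, var2, size(var2), MPI_REAL, status, ierr)
  end if
end subroutine append_1d

Using the current file size as the displacement makes offset 0 of the new view coincide with the end of the file; equivalently you could keep displacement 0 and seek to the byte offset just past the 3D field.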

Rajeev


On Aug 9, 2012, at 2:33 PM, Kokron, Daniel S. (GSFC-610.1)[Computer Sciences Corporation] wrote:

> The attached program writes a single distributed 3D field to a file using calls to MPI_Type_create_subarray,
> MPI_File_set_view and MPI_File_write_all.  It then attempts to append two 1D variables to the end of the file
> from a single rank.  I've implemented three scenarios for the 1D writes; see the case structure in the source for the details of each.
> 
> The final size of the file should be [(516*516*72) + (2*72)] * 4 = 76682304 bytes.
> This is the case under scenarios 0 and 1 for all process counts (isize*jsize) I've tested.
> Under scenario 2 I get the correct size for all process counts tested only
> if isize is 3 or smaller.
> With isize=4, I get a file that is 76683852 B (+1548 B)
> isize=5 -> 76683952 B  (+100 B)
> isize=6 -> 76684024 B   (+72 B)
> isize=7 -> 76684072 B   (+48 B)
> isize=8 -> 76685912 B (+1840 B)
> isize=9 -> 76685968 B   (+56 B)
> 
> These results are from using the Intel 12.1.4.319 compiler with SGI MPT-2.06 on the Pleiades platform (Linux 2.6.32.54-0.3.1.20120223-nasa).
> I've also confirmed that the odd behavior occurs under MPT-1.25, IntelMPI-4.0.2.003, IntelMPI-3.1.038,
> MVAPICH2-1.8 and OpenMPI-1.5.5 on the same platform with the same compiler.
> 
> The odd behavior is present when writing to Lustre or a RAM FS on Pleiades.
> 
> The odd behavior is present on the Discover platform (Linux 2.6.27.54-0.2-default) using Intel 12.1.4.319 with IntelMPI-4.0.3.008 and writing to a GPFS file system.
> 
> Is this expected behavior?
> 
> Daniel Kokron
> NASA Ames (ARC-TN)
> SciCon group
> 301-286-3959
> <tst_2Dlayout.F90>


