[mpich-discuss] Problem with MPI_File_write_all in Fortran90?
Rajeev Thakur
thakur at mcs.anl.gov
Tue Jul 24 08:09:28 CDT 2012
It's the 2D array indexing problem in C: local_array[i,j] uses the comma operator, so it is the same as local_array[j]. Only the first two elements of the buffer ever get initialized, and the rest is whatever malloc returned, which is the garbage you see in the od output for datafile_c. If you declare the array as int local_array[2][2], and initialize it as local_array[i][j] = blah, not local_array[i,j] = blah, it will work.
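For example, keeping the malloc'd buffer from your test program, a minimal sketch of the corrected loop (only the index expression changes) would be:

    for (i = 0; i < 2; i++) {
        for (j = 0; j < 2; j++) {
            /* index the flat 2x2 buffer explicitly; [i,j] would mean [j] */
            local_array[i*2 + j] = rank * 100000 + (i+1)*1000 + (j+1);
        }
    }

Either form (a real int local_array[2][2] indexed as [i][j], or the flat buffer indexed as [i*2 + j]) hands MPI_File_write_all a fully initialized 4-integer buffer.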
Rajeev
On Jul 24, 2012, at 5:16 AM, Angel de Vicente wrote:
> Hi,
>
> Rajeev Thakur <thakur at mcs.anl.gov> writes:
>> Even in C, the file size should be 40K for a 100*100 integer array, not 118K. 9.8T is generous.
>
> I guess if the system says that "restart is required" after installing a
> bunch of new libraries, perhaps it is really required :-) After
> restarting, the Fortran code (simplified code, which I attach) behaves
> as expected, but now it is the C version that is giving me funny
> results.
>
> angelv$ mpicc -o parallel_i_oc parallel_i_o.c
> angelv$ mpif90 -o parallel_i_of parallel_i_o.f90
> angelv$ rm datafile_c
> angelv$ rm datafile_f
>
> angelv$ mpiexec -n 4 ./parallel_i_oc
> angelv$ mpiexec -n 4 ./parallel_i_of
>
> angelv$ od -i datafile_c
> 0000000 2001 2002 102001 102002
> 0000020 -2038304232 32707 -1525747176 32711
> 0000040 202001 202002 302001 302002
> 0000060 -892304872 32686 -13377000 32668
> 0000100
>
> angelv$ od -i datafile_f
> 0000000 1001 2001 201001 202001
> 0000020 1002 2002 201002 202002
> 0000040 101001 102001 301001 302001
> 0000060 101002 102002 301002 302002
> 0000100
> angelv$
>
> Do you see any problem with the C code?
>
> #include "mpi.h"
>
> int main( int argc, char *argv[] )
> {
>     int gsizes[2], distribs[2], dargs[2], psizes[2], rank, size, m, n;
>     MPI_Datatype filetype;
>     int i,j;
>     MPI_File fh;
>     int *local_array;
>     MPI_Status status;
>
>     MPI_Init( &argc, &argv );
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     MPI_Comm_size( MPI_COMM_WORLD, &size );
>
>     local_array = (int *)malloc( 2 * 2 * sizeof(int) );
>
>     for (i=0;i<2;i++) {
>         for (j=0;j<2;j++) {
>             local_array[i,j] = rank * 100000 + (i+1)*1000 + (j+1);
>         }
>     }
>
>     gsizes[0] = 4;
>     gsizes[1] = 4;
>
>     distribs[0] = MPI_DISTRIBUTE_BLOCK;
>     distribs[1] = MPI_DISTRIBUTE_BLOCK;
>
>     dargs[0] = MPI_DISTRIBUTE_DFLT_DARG;
>     dargs[1] = MPI_DISTRIBUTE_DFLT_DARG;
>
>     psizes[0] = 2;
>     psizes[1] = 2;
>
>     MPI_Type_create_darray(4, rank, 2, gsizes, distribs, dargs,
>                            psizes, MPI_ORDER_C, MPI_INT, &filetype);
>     MPI_Type_commit(&filetype);
>
>     MPI_File_open(MPI_COMM_WORLD, "datafile_c",
>                   MPI_MODE_CREATE | MPI_MODE_WRONLY,
>                   MPI_INFO_NULL, &fh);
>
>     MPI_File_set_view(fh, 0, MPI_INT, filetype, "native",
>                       MPI_INFO_NULL);
>
>     MPI_File_write_all(fh, local_array, 4,
>                        MPI_INT, &status);
>
>     MPI_File_close(&fh);
>
>     MPI_Finalize();
>     return 0;
> }
> <parallel_i_o.f90>
> Thanks,
> --
> Ángel de Vicente
> http://www.iac.es/galeria/angelv/
>