[mpich-discuss] ROMIO: Read overlapping regions - Strange results

Rajeev Thakur thakur at mcs.anl.gov
Tue Aug 24 16:56:52 CDT 2010


Thanks for the detailed bug report. Both MPICH2 and MPICH1 give the same incorrect results, whereas IBM MPI (for POWER systems), which does not use ROMIO, gives the correct result. There must be a bug in the way ROMIO processes filetypes that contain overlapping regions. If I change the test program to remove the overlap (set dsp[1]=2), the output is correct.
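
For reference, the only change is in the filetype definition; a minimal sketch of the non-overlapping variant (the rest of your program unchanged):

        dsp[0] = 0;  lng[0] = 2;   /* bytes 0 and 1              */
        dsp[1] = 2;  lng[1] = 2;   /* bytes 2 and 3 (no overlap) */
        err = MPI_Type_indexed(2, lng, dsp, MPI_CHAR, &filetype);
        err = MPI_Type_commit(&filetype);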

Rajeev


On Aug 24, 2010, at 9:58 AM, Pascal Deveze wrote:

> Hi all,
> 
> In the MPI2 Standard, chapter "File views" (http://www.mpi-forum.org/docs/mpi-20-html/node184.htm#Node184) :
> "If the file is opened for writing, neither the etype nor the filetype is permitted to contain overlapping regions. This restriction is equivalent to the `datatype used in a receive cannot specify overlapping regions' restriction for communication."
> I understand that if the file is opened for reading, overlapping regions are permitted.
> I tried with a very simple example and the results with mpich2-1.2.1p1 are surprising.
> 
> The filetype is described by:
>         dsp[0]= 0;
>         lng[0] = 2;
>         dsp[1]= 1;
>         lng[1] = 2;
>         err = MPI_Type_indexed (2, lng, dsp, MPI_CHAR, &filetype);
> With this definition, there is an overlap in the byte at offset 1.
> The file contains the following byte values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...
> I expect to read "0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 7, 8" (every value of the form 3n+1 appears twice).
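> 
> For clarity, here is a small stand-alone sketch (not part of the test program below) that just prints the sequence I expect: the filetype has an extent of 3 bytes and selects bytes 0, 1, 1, 2 of each 3-byte tile.
> 
> #include <stdio.h>
> 
> /* Print the expected view contents for the first 4 tiles:
>    tile t contributes bytes 3t, 3t+1, 3t+1, 3t+2. */
> int main(void) {
>     int t;
>     for (t = 0; t < 4; t++)
>         printf("%d %d %d %d ", 3*t, 3*t+1, 3*t+1, 3*t+2);
>     printf("\n");   /* prints: 0 1 1 2 3 4 4 5 6 7 7 8 9 10 10 11 */
>     return 0;
> }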
> 
> The results I get depend on how I read the file:
> Read 1 bytes per 1 bytes:   0  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
> Read 2 bytes per 2 bytes:   0  1  1  1  1  2  3  4  4  4  4  5  6  7  7  7  7  8  9 10
> Read 3 bytes per 3 bytes:   0  1  1  1  2  3  4  4  5  6  7  7  7  8  9 10 10 11 12 13
> Read 4 bytes per 4 bytes:   0  1  1  2  3  4  4  5  6  7  7  8  9 10 10 11 12 13 13 14
> Read 5 bytes per 5 bytes:   0  1  1  2  3  4  4  5  6  7  7  7  8  9 10 10 10 11 12 13
> Read 6 bytes per 6 bytes:   0  1  1  2  3  4  4  4  5  6  7  7  7  8  9 10 10 11 12 13
> Read 7 bytes per 7 bytes:   0  1  1  2  3  4  4  4  5  6  7  7  8  9 10 10 11 12 13 13
> Read 8 bytes per 8 bytes:   0  1  1  2  3  4  4  5  6  7  7  8  9 10 10 11 12 13 13 14
> Read 9 bytes per 9 bytes:   0  1  1  2  3  4  4  5  6  7  7  8  9 10 10 11 12 13 13 13
> Only the results for "4 bytes per 4 bytes" and "8 bytes per 8 bytes" are correct. The others are wrong.
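> 
> (For illustration only, this is not part of the results above: a hypothetical check that could be added after the inner read loop in the program below, comparing the bytes just read against the expected pattern in which tile t contributes 3t, 3t+1, 3t+1, 3t+2.)
> 
>                 int ok = 1, off[4] = {0, 1, 1, 2};  /* byte offsets selected in each 3-byte tile */
>                 for (i = 0; i < read_nb*n; i++)
>                         if (buffer[i] != (char)(3*(i/4) + off[i%4])) ok = 0;
>                 printf("n=%d: %s\n", n, ok ? "matches expected" : "MISMATCH");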
> 
> Is it possible to read overlapping regions?
> Or is there something I am misunderstanding?
> 
> Pascal
> 
> ============= Simple program ==========
> #include <stdio.h>
> #include "mpi.h"
> 
> #define file "/tmp/TESTFILE"
> #define IO_SIZE 30
> char buffer[IO_SIZE];
> int dsp[2], lng[2], myid, i, n, err, read_nb;
> MPI_Datatype filetype;
> MPI_Status status;
> MPI_File fh;
> 
> int main(int argc,char **argv) {
>  MPI_Init(&argc,&argv);
>  MPI_Comm_rank(MPI_COMM_WORLD,&myid);
> 
>  if (!myid) {
>         err = MPI_File_open(MPI_COMM_SELF, file, MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
>         for (i=0; i<IO_SIZE; i++)  buffer [i] = i;
>         err = MPI_File_write (fh, buffer, IO_SIZE, MPI_CHAR, &status);
> 
>         dsp[0]= 0;
>         lng[0]= 2;
>         dsp[1]= 1; /* overlap at offset 1 */
>         lng[1]= 2;
>         err = MPI_Type_indexed (2, lng, dsp, MPI_CHAR, &filetype);
>         err = MPI_Type_commit (&filetype);
> 
>         err = MPI_File_set_view(fh, 0, MPI_CHAR, filetype, "native", MPI_INFO_NULL);
> 
>         for (n=1; n<10; n++) {
>                 read_nb=IO_SIZE/n;
>                 MPI_File_seek(fh, 0, MPI_SEEK_SET);
>                 // Read the file n bytes at a time
>                 for (i=0; i<read_nb; i++) MPI_File_read(fh, buffer+i*n, n, MPI_CHAR, &status);
>                 // Display result
>                 printf("Read %d bytes per %d bytes: ", n, n);
>                 for (i=0; i<20; i++) printf(" %2d", buffer[i]);
>                 printf("\n");
>         }
>         err = MPI_File_close(&fh);
>  }
>  MPI_Finalize();
>  return 0;
> }