[EXTERNAL] Re: Strange behavior in ncmpio_file_set_view

Sjaardema, Gregory D gdsjaar at sandia.gov
Fri Oct 29 09:46:40 CDT 2021


Attached are a small program and a file that seem to reproduce the issue I am seeing.  I think the program is correct...
..Greg

On 10/28/21, 4:16 PM, "parallel-netcdf on behalf of Sjaardema, Gregory D" <parallel-netcdf-bounces at lists.mcs.anl.gov on behalf of gdsjaar at sandia.gov> wrote:

    Changing the variable at the netCDF level from NC_INDEPENDENT to NC_COLLECTIVE is what ended up hitting the `NC_REQ_COLL` test in get_varm.

    I downloaded and compiled the crash_mpiio.txt program and it runs correctly. I then recompiled with openmpi-4.0.1 and I do get the crash, so I am fairly confident that the openmpi I am using has the fix applied.  I will try to create a short program fragment that reproduces the crash...

    ..Greg
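
For reference, a minimal sketch of the netCDF-level per-variable switch described above, assuming a netCDF build with parallel support (the path and variable name are placeholders; error checking omitted):

#include <mpi.h>
#include <netcdf.h>
#include <netcdf_par.h>

int main(int argc, char **argv) {
    int ncid, varid;
    MPI_Init(&argc, &argv);
    /* open the file for parallel access */
    nc_open_par("attrib.nc", NC_NOWRITE, MPI_COMM_WORLD, MPI_INFO_NULL, &ncid);
    nc_inq_varid(ncid, "some_var", &varid);      /* placeholder variable name */
    /* per-variable switch: was NC_INDEPENDENT, now collective */
    nc_var_par_access(ncid, varid, NC_COLLECTIVE);
    /* ... subsequent nc_get_var* calls on varid are now collective ... */
    nc_close(ncid);
    MPI_Finalize();
    return 0;
}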

    On 10/28/21, 4:06 PM, "Wei-Keng Liao" <wkliao at northwestern.edu> wrote:


        Both constants NC_REQ_INDEP and NC_REQ_COLL are defined internally in PnetCDF.
        They are not visible to user applications. I wonder how you switch them.
        In PnetCDF, NC_REQ_INDEP is used when the program is in independent mode
        and NC_REQ_COLL when it is in collective mode. Users can switch between the two
        modes by calling ncmpi_begin_indep_data() and ncmpi_end_indep_data().
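
A minimal sketch of this mode switch at the PnetCDF level (placeholder file and variable names; error checking omitted):

#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv) {
    int ncid, varid, buf[4];
    MPI_Offset start[1] = {0}, count[1] = {4};

    MPI_Init(&argc, &argv);
    ncmpi_open(MPI_COMM_WORLD, "attrib.nc", NC_NOWRITE, MPI_INFO_NULL, &ncid);
    ncmpi_inq_varid(ncid, "some_var", &varid);

    /* collective data mode (the default after open): internally NC_REQ_COLL */
    ncmpi_get_vara_int_all(ncid, varid, start, count, buf);

    /* enter independent data mode: internally NC_REQ_INDEP */
    ncmpi_begin_indep_data(ncid);
    ncmpi_get_vara_int(ncid, varid, start, count, buf);
    ncmpi_end_indep_data(ncid);   /* back to collective data mode */

    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}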

        Googling the keyword "cost_calc" leads me to this page:
        https://github.com/open-mpi/ompi/issues/6758
        which includes a test program that can reproduce the error: crash_mpiio.txt
        Maybe you can give it a try to see if the openmpi-4.0.5 you are using has
        incorporated the fix?

        If that is not the case, could you provide me a short program or a code
        fragment?

        Wei-keng

        > On Oct 28, 2021, at 4:24 PM, Sjaardema, Gregory D <gdsjaar at sandia.gov> wrote:
        > 
        > I am getting a floating point exception core dump below `ncmpio_file_set_view` with certain compilers…
        >  
        > I’ve been trying to track it down, but am confused by the code in `get_varm` for the case when one rank has no items to get and the other rank has items to read with a non-unit stride (7 in this case).
        > This is from a code using netCDF to call down into PnetCDF.
        >  
        > Originally, the variable being read used NC_REQ_INDEP, so the rank with zero items to read would return from `get_varm` at line 464 while the other rank continued.  It would eventually end up in `ncmpio_file_set_view`, call down, and finally throw the floating point exception.
        >  
        > Since there is a comment that `MPI_File_set_view` is collective, I figured the issue might be that only one rank was calling down that path, so I changed the variable to NC_REQ_COLL.  Both ranks now call down into `ncmpio_file_set_view`, but inside that routine the rank with zero bytes to read falls into the first if block `if (filetype == MPI_BYTE)` while the other rank goes down further and hits the next if block `if (rank == 0)`.
        >  
        > Both ranks end up calling MPI_File_set_view, but with different types.  The end result is that I still get a floating point exception on the rank that does have bytes to read.  The exception seems to be in `cost_calc`.
        >  
        > This is with pnetcdf-1.12.1, clang-12.0.0 (also with clang-10.0.0) and openmpi-4.0.5.
        >  
        > At this point I’m basically looking for guidance on whether the calling paths look correct, or where to look in more depth…   Any help appreciated.
        >  
        > ..Greg
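
For reference, a minimal standalone sketch of the calling pattern described above, in which every rank participates in the collective MPI_File_set_view and the zero-item rank passes MPI_BYTE (placeholder file name and sizes; this is not the actual PnetCDF internals):

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, nitems, buf[4];
    MPI_File fh;
    MPI_Datatype ftype = MPI_BYTE;   /* ranks with nothing to read keep MPI_BYTE */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* placeholder decomposition: rank 0 reads 4 ints with stride 7, others read none */
    nitems = (rank == 0) ? 4 : 0;

    MPI_File_open(MPI_COMM_WORLD, "attrib.nc", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);

    if (nitems > 0) {
        /* strided file layout: one int out of every 7 */
        MPI_Type_vector(nitems, 1, 7, MPI_INT, &ftype);
        MPI_Type_commit(&ftype);
    }

    /* collective: every rank must make this call, even the empty one */
    MPI_File_set_view(fh, 0, MPI_BYTE, ftype, "native", MPI_INFO_NULL);

    /* collective read; count is 0 on the empty rank */
    MPI_File_read_all(fh, buf, nitems, MPI_INT, MPI_STATUS_IGNORE);

    if (ftype != MPI_BYTE) MPI_Type_free(&ftype);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}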



-------------- next part --------------
A non-text attachment was scrubbed...
Name: attrib.nc
Type: application/octet-stream
Size: 348 bytes
Desc: attrib.nc
URL: <http://lists.mcs.anl.gov/pipermail/parallel-netcdf/attachments/20211029/48228ab9/attachment-0002.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: attrib-test.c
Type: application/octet-stream
Size: 5203 bytes
Desc: attrib-test.c
URL: <http://lists.mcs.anl.gov/pipermail/parallel-netcdf/attachments/20211029/48228ab9/attachment-0003.obj>

