PnetCDF, ROMIO & "noac" setting

Carl Ponder cponder at nvidia.com
Tue Apr 17 09:33:14 CDT 2018


*On 04/17/2018 08:04 AM, Carl Ponder wrote:*
>
>     my sysadmins tell me that NFS is all that we're ever going to
>     have on our cluster. I can live with substandard I/O performance;
>     I just want the various I/O libraries to give the right data.
>     We can try following these ROMIO instructions, but can you answer
>     some questions about it?
>
>     1) It says that ROMIO is used in MPICH. Just to confirm, do
>     MVAPICH2 & OpenMPI & Intel MPI all use ROMIO?
>         And even if not, would the instructions still be relevant to
>     these other MPIs?
>
>     2) It talks about NFS version 3. It looks like we have a
>     combination of 3 & 4 on our system.
>          Are the instructions relevant to NFS version 4?
>
>     3) Given the reservations about performance, I suppose I could ask
>     for a special directory to be created & mounted for "coherent"
>     operations, and leave the other mounts as-is.
>         Do you see any problems with doing this?
>
*On 04/17/2018 08:15 AM, Latham, Robert J. wrote:*
>
>     I would do the following:
>
>     NFS is not a parallel file system.  Read-only from NFS might be
>     OK.  Writing is trickier because NFS implements bizarre caching
>     rules.  Sometimes the clients hold onto data for a long time, even
>     after you ask in every way possible to please write this data out
>     to disk.
>
Robert -- You mean that setting "noac" doesn't help, or that it isn't a 
full fix?
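
For context, "noac" here is the NFS client mount option that disables
attribute caching on the client. A hypothetical /etc/fstab entry for
the kind of "coherent" mount I asked about above (server and paths are
just placeholders):

    nfsserver:/export/scratch  /mnt/scratch  nfs  noac  0 0
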
>
>     I would use collective I/O, but then tell the MPI-IO library to
>     carry out all I/O from just one rank.  For ROMIO, if you set the
>     hint "cb_nodes" to "1", then all clients will use a single
>     aggregator to write, and you can hopefully sidestep any caching
>     problems.
>
>     You should also set "romio_cb_write" to "enable" to force all writes,
>     even ones ROMIO might think are nice friendly large contiguous
>     requests, to take the two-phase collective buffering path.
>
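For the record, these are MPI-IO hints rather than shell variables, so
one way to apply them is to attach them to an MPI_Info object, which
PnetCDF forwards to MPI-IO. A minimal sketch using ncmpi_create (the
file name and create mode are just placeholders):

    #include <stdio.h>
    #include <mpi.h>
    #include <pnetcdf.h>

    int main(int argc, char **argv)
    {
        int ncid, err;
        MPI_Info info;

        MPI_Init(&argc, &argv);

        /* Attach the two hints suggested above to an MPI_Info object. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "cb_nodes", "1");            /* single I/O aggregator */
        MPI_Info_set(info, "romio_cb_write", "enable"); /* force two-phase collective writes */

        /* PnetCDF passes the info object through to MPI-IO when it opens the file. */
        err = ncmpi_create(MPI_COMM_WORLD, "testfile.nc", NC_CLOBBER, info, &ncid);
        if (err != NC_NOERR)
            fprintf(stderr, "ncmpi_create: %s\n", ncmpi_strerror(err));
        else
            ncmpi_close(ncid);

        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }
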
For the environment modules, I'll try the hints-file mechanism: as far
as I can tell, ROMIO doesn't read individual hints from shell
variables, but it does read them from a file named by the ROMIO_HINTS
environment variable:

    export ROMIO_HINTS=$HOME/romio_hints
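
where the hints file is plain text with one "hint value" pair per line
(the path above is just an example):

    cb_nodes 1
    romio_cb_write enable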




