Using parallel write with subset of processors
Ross, Robert B.
rross at mcs.anl.gov
Tue May 10 10:50:38 CDT 2022
You should be able to use put_var_all, with every process making the call, and just have the processes that hold no data indicate they have nothing to write (a zero-length count)? -- Rob
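For illustration, a minimal sketch of that approach in C with the PnetCDF API (the file name, dimension sizes, and which ranks own data are invented for the example, not taken from this thread):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <pnetcdf.h>

/* print any PnetCDF error but keep going */
#define CHECK(err) do { if ((err) != NC_NOERR) \
    printf("Error: %s\n", ncmpi_strerror(err)); } while (0)

int main(int argc, char **argv) {
    int rank, ncid, dimids[2], varid, err;
    MPI_Offset start[2] = {0, 0}, count[2] = {0, 0};
    double *buf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* hypothetical 4 x 8 slice variable */
    err = ncmpi_create(MPI_COMM_WORLD, "slice.nc", NC_CLOBBER,
                       MPI_INFO_NULL, &ncid);                         CHECK(err);
    err = ncmpi_def_dim(ncid, "y", 4, &dimids[0]);                    CHECK(err);
    err = ncmpi_def_dim(ncid, "x", 8, &dimids[1]);                    CHECK(err);
    err = ncmpi_def_var(ncid, "slice", NC_DOUBLE, 2, dimids, &varid); CHECK(err);
    err = ncmpi_enddef(ncid);                                         CHECK(err);

    if (rank < 4) {  /* pretend only ranks 0..3 each own one row of the slice */
        start[0] = rank;  start[1] = 0;
        count[0] = 1;     count[1] = 8;
        buf = (double*) malloc(8 * sizeof(double));
        for (int i = 0; i < 8; i++) buf[i] = (double) rank;
    }

    /* collective: EVERY rank must call this; count = {0,0} means
       "nothing to write", so ranks outside the slice still participate */
    err = ncmpi_put_vara_double_all(ncid, varid, start, count, buf);  CHECK(err);

    err = ncmpi_close(ncid);                                          CHECK(err);
    free(buf);
    MPI_Finalize();
    return 0;
}

The key point is that the _all call is collective, so a hang like the one described below usually means some ranks skipped the call; a zero-length count lets every rank satisfy the collective requirement without contributing data.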
From: parallel-netcdf <parallel-netcdf-bounces at lists.mcs.anl.gov> on behalf of Pascale Garaud <pgaraud at soe.ucsc.edu>
Date: Tuesday, May 10, 2022 at 10:48 AM
To: parallel-netcdf at lists.mcs.anl.gov <parallel-netcdf at lists.mcs.anl.gov>
Subject: Using parallel write with subset of processors
Hello
I am trying to understand the best way to use the library to write data that is held by only a subset of the processors in my communicator. (Specifically, I have a 3D dataset and would like to save only a slice of it to file, and that slice is held by only a subset of the processors.)
Is there a good way to do that efficiently? The code currently uses a collective PUT_VAR_ALL to write the 3D dataset to file, but that does not work for the slice (and hangs when I try).
I could copy all of the data for the slice onto a single processor and then do an "independent" write from that processor, but that does not seem very efficient.
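If you did go the gather-to-one-process route, the independent-mode API would look roughly like this (a fragment, not a complete program; it assumes the same ncid/varid/start/count setup as the sketch above, and that the whole slice has already been gathered into a hypothetical slice_buf on rank 0):

/* leave the default collective data mode; begin/end are themselves
   called by all ranks */
ncmpi_begin_indep_data(ncid);
if (rank == 0) {
    /* only rank 0 performs I/O in independent mode */
    ncmpi_put_vara_double(ncid, varid, start, count, slice_buf);
}
ncmpi_end_indep_data(ncid);

This serializes the write through one process, so the zero-count collective approach above generally scales better.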
I tried to understand how to use IPUT instead, but I am very confused about the syntax and procedure, especially since all of the examples I have seen end up using every processor for the write.
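For reference, the nonblocking (iput) route handles the subset case in much the same way: ranks that own data post a request, and every rank then calls the collective ncmpi_wait_all, passing zero requests if it posted none. A hedged sketch, again with invented names and sizes:

#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv) {
    int rank, ncid, dimid, varid, nreqs = 0, req, st;
    MPI_Offset start[1], count[1];
    double val;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    ncmpi_create(MPI_COMM_WORLD, "iput_demo.nc", NC_CLOBBER,
                 MPI_INFO_NULL, &ncid);
    ncmpi_def_dim(ncid, "x", 4, &dimid);
    ncmpi_def_var(ncid, "v", NC_DOUBLE, 1, &dimid, &varid);
    ncmpi_enddef(ncid);

    if (rank < 4) {                      /* only ranks 0..3 hold data */
        start[0] = rank;  count[0] = 1;  val = (double) rank;
        /* post the write; no data moves yet */
        ncmpi_iput_vara_double(ncid, varid, start, count, &val, &req);
        nreqs = 1;
    }

    /* collective flush: ranks that posted nothing pass nreqs == 0 */
    ncmpi_wait_all(ncid, nreqs, nreqs ? &req : NULL, nreqs ? &st : NULL);

    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}

Either way, the one rule is that the collective call (ncmpi_put_vara_*_all or ncmpi_wait_all) must be reached by every rank in the communicator.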
I am very much a beginner at this, so any help would be greatly appreciated. Thank you!
Pascale Garaud.
--
--------------------------------------------------------------------------------------------------------------
Pascale Garaud
Professor in Applied Mathematics
UC Santa Cruz, California
(831)-459-1055
--------------------------------------------------------------------------------------------------------------