pnetcdf and large transfers
Rob Latham
robl at mcs.anl.gov
Tue Jul 2 11:00:22 CDT 2013
On Tue, Jul 02, 2013 at 10:39:12AM -0500, Wei-keng Liao wrote:
> I am speaking from an academic point of view :)
> The number of transfers may not be that straightforward.
> Although in the blocking APIs the buffers are all contiguous, the
> fileviews may not be. The performance of a collective I/O depends on
> whether its aggregate access region is contiguous or not. There can
> be a case where the aggregate access region is contiguous if
> transferred at once, but the individual aggregate access regions are
> not if the request is broken into multiple transfers. On the other
> hand, whether the I/O load is balanced among the processes is also
> important, but less serious. I mentioned this to you at the SDAV AHM,
> if you remember.
>
> I guess for the blocking APIs the worst scenario might rarely happen,
> but for the nonblocking APIs the problem can occur more often.
That's a good argument for describing all the data in a single
request.
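For reference, here's a rough sketch of what "one request" looks like
on the MPI side (not PnetCDF's actual code; the write_large name and
the 1 MiB chunk size are made up): wrap the buffer in a derived
datatype so a single call with an int count can still describe more
than 2 gigabytes.

#include <mpi.h>

/* Sketch: describe a large contiguous buffer in one MPI-IO call by
 * folding it into a derived datatype, keeping the int count small.
 * Assumes nbytes is a multiple of the 1 MiB chunk, for brevity. */
static int write_large(MPI_File fh, const void *buf, MPI_Offset nbytes)
{
    const int chunk = 1 << 20;          /* 1 MiB per datatype element */
    MPI_Datatype bigtype;
    MPI_Status status;
    int err;

    err = MPI_Type_contiguous(chunk, MPI_BYTE, &bigtype);
    if (err != MPI_SUCCESS) return err;
    MPI_Type_commit(&bigtype);

    /* count = nbytes / chunk stays within int range well past 2 GiB */
    err = MPI_File_write_all(fh, buf, (int)(nbytes / chunk), bigtype,
                             &status);

    MPI_Type_free(&bigtype);
    return err;
}

A call like this is exactly the kind that trips up the broken
implementations, which (as far as I can tell) typically do the
count-times-size arithmetic in a 32-bit int internally.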
I don't know, though, how we get around the defective MPI
implementation problem. I'm all for sticking to the standard and
insisting "I'm right, darn it!", but I expect our pnetcdf end-users
are more interested in getting work done than in making an ideological
stand.
Under SDAV we can fix ROMIO and MPICH, and we can maybe come up with
enough test cases to get Cray and IBM's attention (though both vendors
recently delivered their big machines, so it's hard to say how lucky
we'd be on that front).
I think we can do the following in the short term:
- continue to return an error when more than 2 gigabytes of data is
requested in the non-blocking case
- split up large blocking I/O (see the sketch after this list)
- hammer away at MPI vendors with "large datatype" test cases until we
are reasonably sure pnetcdf users will not have any problems
- it's too bad these defects in implementations show up at
run-time. I hate to introduce an AC_TRY_RUN since it messes
up cross-compilation, but we might have no other choice
here.
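As a rough illustration of the "split up large blocking I/O" item
above (the write_in_chunks name and the chunk size are placeholders,
not what PnetCDF would actually do):

#include <mpi.h>
#include <limits.h>

#define MAX_CHUNK (INT_MAX / 2)   /* stay well under the 2 GiB limit */

/* Sketch: break one big blocking write into pieces whose lengths each
 * fit in MPI's int-sized count argument. */
static int write_in_chunks(MPI_File fh, MPI_Offset offset,
                           const char *buf, MPI_Offset nbytes)
{
    MPI_Status status;
    while (nbytes > 0) {
        int len = (nbytes > MAX_CHUNK) ? MAX_CHUNK : (int)nbytes;
        int err = MPI_File_write_at(fh, offset, buf, len, MPI_BYTE,
                                    &status);
        if (err != MPI_SUCCESS) return err;
        offset += len;
        buf    += len;
        nbytes -= len;
    }
    return MPI_SUCCESS;
}

Note the independent write: with a collective call every process would
have to make the same number of passes even when its own request is
small, and each pass would cover only part of the aggregate access
region, which is exactly Wei-keng's concern above.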
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA