pnetcdf question

Rob Ross rross at mcs.anl.gov
Thu Mar 18 12:42:37 CST 2004


Hi James,

I can think of a couple of solutions.

One thing that you could do is write two arrays: one consisting of the single 
blocks of data (the "a"s and "b"s from your example), and another 
describing how those were laid out across processors:

array 1: 1a 1b 1c 2a 3a 3b 3c 3d 4a 4b 4c
array 2: 3 1 4 3

You could use a record (unlimited) variable for array 1 if you like; as 
long as you only use one record variable in a PnetCDF file, performance 
should be ok.

Alternatively, you could make the array sparse, depending on how much the 
array sizes vary between your processes.

Alternatively, you could write one variable per process.

Perhaps if you described the data a little more in terms of the logical 
structure, we could better help lay the data out in the PnetCDF format (or 
suggest a better format).

Thanks,

Rob

On Wed, 17 Mar 2004, Foucar, James G wrote:

> Hello, I am having trouble figuring out how to implement a disk dump utility
> using PnetCDF. Each processor is to dump the contents of a certain array
> into a PnetCDF dataset. The problem is that the size of the array may vary
> between processors, so my ideal dataset would look like this:
> 
> processor: 1 2 3 4 
> data:      a a a a 
>            b   b b 
>            c   c c 
>                d 
> The problem is that the data dimension length is not constant across
> processors; however, the define-mode functions are collective and must be
> called with the same values. Does anyone know an elegant way of getting
> around this problem?
> 
> Thanks, 
> James 




More information about the parallel-netcdf mailing list