<div dir="ltr"><div><div>Hi Wei-keng,<br><br> "What do you do with the dumped data?"<br><br></div> Sometimes I need to compare two netcdf files. It seems that there is not a utility provided by netCDF. I notice that pnetcdf provides a utility ncmpidiff. Does it do netcdf files comparison? Is it faster? <br>
Cheers,
Lyndon.

On Tue, Jul 2, 2013 at 2:02 PM, Wei-keng Liao <wkliao@ece.northwestern.edu> wrote:
Hi, Lyndon,

What do you do with the dumped data?
If you can describe your I/O requirements in more detail,
maybe we can find a way to avoid dumping the data into text form.
In most cases, users use the binary data directly.

I would say parallel netCDF is very scalable. Note that good parallel
I/O performance also requires parallel file system support.
There are several example programs in the software distribution.
I hope they are helpful for you.
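
In case it helps, a minimal parallel write with the Fortran interface looks roughly like the sketch below (this is only an illustration, not one of the distributed examples; the file and variable names are arbitrary and error checking is omitted):

    program write_example
        implicit none
        include 'mpif.h'
        include 'pnetcdf.inc'
        integer :: err, rank, nprocs, ncid, dimid(1), varid
        integer :: buf(10)
        integer(kind=MPI_OFFSET_KIND) :: gsize, start(1), count(1)

        call MPI_Init(err)
        call MPI_Comm_rank(MPI_COMM_WORLD, rank, err)
        call MPI_Comm_size(MPI_COMM_WORLD, nprocs, err)

        ! each process writes a 10-element slice of one global 1D variable
        gsize = 10 * nprocs
        err = nfmpi_create(MPI_COMM_WORLD, 'output.nc', NF_CLOBBER, &
                           MPI_INFO_NULL, ncid)
        err = nfmpi_def_dim(ncid, 'x', gsize, dimid(1))
        err = nfmpi_def_var(ncid, 'var', NF_INT, 1, dimid, varid)
        err = nfmpi_enddef(ncid)

        start(1) = rank * 10 + 1    ! PnetCDF Fortran indices are 1-based
        count(1) = 10
        buf = rank
        ! collective write: all processes write their slices concurrently
        err = nfmpi_put_vara_int_all(ncid, varid, start, count, buf)
        err = nfmpi_close(ncid)

        call MPI_Finalize(err)
    end program write_example

Compile it with mpif90 against the PnetCDF library and run it with mpiexec, the same way as the other MPI programs.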
<span class="HOEnZb"><font color="#888888"><br>
<br>
Wei-keng<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
On Jul 1, 2013, at 7:53 PM, Lyndon Lu wrote:<br>
<br>
> Hi Wei-keng,
>
> The file size of some of the netCDF files we have is bigger than 2GB. The conversion process will take longer for those.
>
> I will rewrite our old MPI programs with PnetCDF. Could you please tell me how much the performance improves compared with using netCDF?
>
> Thanks.
>
> Cheers,
> Lyndon.
>
>
> On Tue, Jul 2, 2013 at 2:34 AM, Wei-keng Liao <wkliao@ece.northwestern.edu> wrote:
> Hi Lyndon,
>
> Thanks for pointing this out. We have not spent much time
> improving this utility.
>
> Do you mind telling us why you need to run this in parallel?
> Note that ncdump/ncmpidump converts a netCDF file to text form.
> The standard output will be the I/O bottleneck, so unscalable
> performance should be expected. If you can tell us what you intend
> to use this utility for, we might come up with a better solution.
>
>
> Wei-keng
>
> On Jul 1, 2013, at 1:54 AM, Lyndon Lu wrote:
>
> > Hi Wei-keng,
> >
> >
> > Thanks! I was expecting ncmpidump to take less time than ncdump because it runs over multiple processors. In fact, it took much longer, even though it can generate multiple copies of the output. So it may not be useful to us. It would probably be more useful if it were designed the other way around: producing a single copy of the output while taking less time by running over multiple processors.
> >
> > Cheers,
> > Lyndon.
> >
> >
> > On Mon, Jul 1, 2013 at 2:11 PM, Wei-keng Liao <wkliao@ece.northwestern.edu> wrote:
> > Hi, Lyndon,
> >
> > ncmpidump is built as an MPI application.
> > Hence you should run it like all other MPI applications.
> > For example: mpiexec -n 2 ncmpidump file.nc
> > If you run it this way, you will see two copies of the
> > dump, each printed from one of the 2 MPI processes.
> >
> > I often use it to print the header of a netCDF file in
> > CDF-5 format, running on one process. Unidata's
> > ncdump currently cannot recognize the CDF-5 format.
> >
> > If you have a different expectation of the output
> > produced by ncmpidump, please let us know.
> >
> > Wei-keng
> >
> > On Jun 30, 2013, at 9:40 PM, Lyndon Lu wrote:
> >
> > > Hi Guys,
> > >
> > > I am also wondering what the advantage of using ncmpidump (under sub-dir bin) over ncdump is. It seems that there is no command-line option to set the number of CPUs for ncmpidump.
> > >
> > > Cheers,
> > > Lyndon.
> > >
> > >
> > > On Mon, Jul 1, 2013 at 9:55 AM, Wei-keng Liao <wkliao@ece.northwestern.edu> wrote:
> > > Thanks, Rajeev. The fix has been added to the SVN repo.
> > >
> > > Hi, Lyndon,
> > > Thanks for giving our latest SVN a try. (File flex_f.f90 has
> > > yet to be officially released.) Your testing helps us improve
> > > the software for the next release. If you find any further
> > > problems or questions, please let us know.
> > >
> > > Wei-keng
> > >
> > > On Jun 30, 2013, at 2:09 AM, Rajeev Thakur wrote:
> > >
> > > > It should be MPI_INTEGER instead of MPI_INT in Fortran.
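> > > >
> > > > A sketch of the corrected call, with MPI_INTEGER as the element type (the surrounding argument names are illustrative guesses, not necessarily those used in flex_f.f90):
> > > >
> > > >     integer :: err, subarray
> > > >     integer :: array_of_sizes(2), array_of_subsizes(2), array_of_starts(2)
> > > >
> > > >     ! describe a 2D subarray of a Fortran-ordered integer array
> > > >     call MPI_Type_create_subarray(2, array_of_sizes, array_of_subsizes, &
> > > >                                   array_of_starts, MPI_ORDER_FORTRAN, &
> > > >                                   MPI_INTEGER, subarray, err)
> > > >     call MPI_Type_commit(subarray, err)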
> > > >
> > > > On Jun 30, 2013, at 1:45 AM, Lyndon Lu wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> When running make under sub-dir examples, one of the Fortran programs cannot be compiled successfully (see below); the other programs seem to be fine.
> > > >>
> > > >> /opt/openmpi/1.4.3/bin/mpif90 -c -g -I../src/lib -I../src/libf flex_f.f90
> > > >> flex_f.f90(117): error #6404: This name does not have a type, and must have an explicit type. [MPI_INT]
> > > >> MPI_INT, subarray, err)
> > > >> ---------------^
> > > >> flex_f.f90(115): error #6285: There is no matching specific subroutine for this generic subroutine call. [MPI_TYPE_CREATE_SUBARRAY]
> > > >> call MPI_Type_create_subarray(2, array_of_sizes, &
> > > >> ---------------^
> > > >> compilation aborted for flex_f.f90 (code 1)
> > > >> make: *** [flex_f.o] Error 1
> > > >>
> > > >> Does anyone know what's wrong with this program flex_f.f90? I am using PnetCDF 1.3.1 / netCDF 4.3.0 / HDF5 1.8.11.
> > > >>
> > > >> In addition, could someone tell me what the advantage of using ncmpidump over ncdump is? It seems that there is no command-line option to set the number of CPUs for ncmpidump.
> > > >>
> > > >>
> > > >> Thanks.
> > > >>
> > > >> Cheers,
> > > >> Lyndon
> > > >
> > >
> > >
> >
> >
>
>