<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sun, Mar 29, 2015 at 2:41 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> On Mar 29, 2015, at 6:40 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br>
><br>
> On Sat, Mar 28, 2015 at 10:36 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
><br>
> > On Mar 28, 2015, at 10:04 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br>
> ><br>
> > On Sat, Mar 28, 2015 at 9:59 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
> ><br>
> > Hakon,<br>
> ><br>
> > I have pushed the change to the branch barry/feature-hdf5-flexible-initial-dimension and to next, so that Vecs, including those obtained from DMCreateGlobalVector() with a DMDA, will NOT have the extra dimension if bs == 1. To add that extra dimension even when bs == 1 with VecView(), or to handle a file with that extra dimension with VecLoad(), one must call PetscViewerHDF5SetBaseDimension2() or use -viewer_hdf5_base_dimension2 true.<br>
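[For illustration only — this sketch does not use PETSc itself; plain NumPy arrays stand in for the HDF5 datasets that VecView() would write. The flag names come from the message above; the array values are made up. The two on-disk layouts the flag selects between for a bs == 1 vector look like this:]

```python
import numpy as np

# Stand-ins for the two on-disk layouts of a bs == 1 Vec of length 21:
n = 21
default_layout = np.linspace(0.0, 1.0, num=n)   # shape (21,): the new default
legacy_layout = default_layout.reshape((n, 1))  # shape (21, 1): what you get with
                                                # -viewer_hdf5_base_dimension2 true
print(default_layout.shape, legacy_layout.shape)  # → (21,) (21, 1)
```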
> ><br>
> > Please try it out and let me know if you have any trouble<br>
> ><br>
> > Thanks<br>
> ><br>
> > Barry<br>
> ><br>
> > Matt,<br>
> ><br>
> > Scream as much as you want, but adding that extra dimension automatically is not intuitive, so it cannot be the default.<br>
> ><br>
> > Interfaces should be intuitive, file formats should be consistent.<br>
><br>
> Hakon sent specific examples where, because of the file format, the user interface is extremely non-intuitive (below). A user loading a "plain old vector" from a file expects a one-dimensional beast, and that is what we should deliver. Now you could argue that the HDF5 interface sucks because it bleeds the file format through the interface; that may be true, but I don't care, that is what we are stuck with.<br>
><br>
> But you changed the PETSc output format in response, which is not necessary.<br>
><br>
> We could put the special case in the reader,<br>
<br>
</span> Matt,<br>
<br>
  The problem is that we are not providing "the reader", nor could we! "The reader" is whichever of the many tools the user chooses to read the HDF5 file with: MATLAB, Python, a Fortran program, a Ruby program, whatever. And the "reader code" the user has to write depends directly on the format used in the file. In his example<br>
<span class=""><br>
> g = h5py.File('grid.h5', 'r')<br>
> x = g['/MESH/nodes/x'][:,0]<br>
<br>
</span> How do we provide this code to the user? Or are you saying we have to provide PETSc-specific HDF5 readers for all packages and configurations? That is totally unrealistic, and even if we did provide them no one would use them. The entire point of HDF5 is that you can write your own readers; you are not stuck using only readers supplied by whoever provided the data set. </blockquote><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">There is not, and should not be, a PETSc-specific HDF5 format with its own readers and writers.</blockquote><div><br></div><div>But of course there already is a PETSc-specific HDF5 format. Someone made up "/MESH/nodes/x". This is the essence of HDF5, XML, etc.: they are not actually formats, but rather storage technologies like the UNIX filesystem. Someone still has to declare a format, and we do.</div><div><br></div><div>However, if you want the data array to be 1D in any old HDF5 tool, except when complex is used, except when timestepping is used, then yes, we need to make an exception for blocksize. I just think we will end up changing this back when it becomes apparent that having to put a check in all these HDF5 tools as well is onerous.</div><div><br></div><div>  Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="HOEnZb"><font color="#888888"><br>
Barry<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
<br>
> so that he could load the 1D vector he expects, or we could load the 2D vector from the PETSc format.<br>
> He gets the intuitive load he wants, but we get a consistent format.<br>
><br>
> Now he might get what he expects from View(), which I am not sure he was asking for, but we have to special-case all our tools that manipulate<br>
> the PETSc format.<br>
><br>
> Matt<br>
><br>
><br>
> Barry<br>
><br>
><br>
><br>
> > This gets that entirely wrong.<br>
> ><br>
> > Matt<br>
> ><br>
> > > On Mar 25, 2015, at 10:36 AM, Håkon Strandenes <<a href="mailto:haakon@hakostra.net">haakon@hakostra.net</a>> wrote:<br>
> > ><br>
> > > Did you come to any conclusion on this issue?<br>
> > ><br>
> > > Regards,<br>
> > > Håkon<br>
> > ><br>
> > > On 20. mars 2015 22:02, Håkon Strandenes wrote:<br>
> > >> On 20. mars 2015 20:48, Barry Smith wrote:<br>
> > >>> Why is 1 dimension a special case that is not worthy of its own<br>
> > >>> format? The same thing would hold for 2d and 3d. One could then argue<br>
> > >>> that we should have a single six dimensional format for the files for<br>
> > >>> all vectors that PETSc produces. Then a 1d problem has five of the<br>
> > >>> dimensions being 1.<br>
> > >><br>
> > >> This is a very good point, and supports my view.<br>
> > >><br>
> > >> Let me come with two very simple example cases:<br>
> > >><br>
> > >><br>
> > >> Case 1:<br>
> > >> Create a list of grid points in an external preprocessor for the purpose<br>
> > >> of loading this into a Vec later:<br>
> > >><br>
> > >> x = np.linspace(0.0, 1.0, num=21)<br>
> > >> f.create_dataset('/MESH/nodes/x', data=x)<br>
> > >><br>
> > >> vs.<br>
> > >><br>
> > >> x = np.linspace(0.0, 1.0, num=21)<br>
> > >> x = x.reshape((21,1))<br>
> > >> f.create_dataset('/MESH/nodes/x', data=x)<br>
> > >><br>
> > >><br>
> > >> Case 2:<br>
> > >> Read three Vecs written to disk by PETSc, and calculate total "bounding<br>
> > >> box volume" of the grid:<br>
> > >><br>
> > >> g = h5py.File('grid.h5', 'r')<br>
> > >> x = g['/MESH/nodes/x']<br>
> > >> y = g['/MESH/nodes/y']<br>
> > >> z = g['/MESH/nodes/z']<br>
> > >> Vol = (x[-1] - x[0])*(y[-1] - y[0])*(z[-1] - z[0])<br>
> > >><br>
> > >> vs.<br>
> > >><br>
> > >> g = h5py.File('grid.h5', 'r')<br>
> > >> x = g['/MESH/nodes/x'][:,0]<br>
> > >> y = g['/MESH/nodes/y'][:,0]<br>
> > >> z = g['/MESH/nodes/z'][:,0]<br>
> > >> Vol = (x[-1] - x[0])*(y[-1] - y[0])*(z[-1] - z[0])<br>
> > >><br>
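[A hypothetical way for a reader to cope with both layouts — this helper is not part of h5py or PETSc, and plain NumPy arrays again stand in for the HDF5 datasets — is to strip a trailing singleton dimension on load, so downstream code like the bounding-box computation above works unchanged either way:]

```python
import numpy as np

def as_1d(data):
    """Return a 1D array from a dataset stored as either (n,) or (n, 1)."""
    arr = np.asarray(data)
    if arr.ndim == 2 and arr.shape[1] == 1:
        return arr[:, 0]
    return arr

# Works whether the writer produced the flat or the extra-dimension layout:
x = as_1d(np.linspace(0.0, 1.0, 21))                  # already (21,)
y = as_1d(np.linspace(0.0, 2.0, 21).reshape(21, 1))   # (21, 1) → (21,)
z = as_1d(np.linspace(0.0, 3.0, 21).reshape(21, 1))   # (21, 1) → (21,)
vol = (x[-1] - x[0]) * (y[-1] - y[0]) * (z[-1] - z[0])
print(vol)  # → 6.0
```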
> > >><br>
> > >> In both cases I think handling this extra, unnecessary dimension makes<br>
> > >> the code less attractive. It's not that either way is difficult,<br>
> > >> problematic or impossible, but it's just that 1D Vecs should intuitively<br>
> > >> be 1D datasets, and not 2D, 3D or 6D. This seriously confused me for<br>
> > >> quite a while until I figured this out, even after having written an<br>
> > >> entire Navier-Stokes DNS solver using the PETSc library for everything<br>
> > >> except time integration and filling these simple 1D coordinate arrays!<br>
> > >><br>
> > >> Regards,<br>
> > >> Håkon<br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
> > -- Norbert Wiener<br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</div></div>