[MOAB-dev] mixed mesh question

Carlos Breviglieri carbrevi at gmail.com
Tue Nov 19 13:25:01 CST 2013


Iulian,

I had a similar issue in the past. I recall that I edited cgns.m4, but not
exactly which part; probably the section below, to link directly with
hdf5_hl instead of plain hdf5.

I use Arch Linux, but not the system libraries: I compile hdf5 and cgns into
a local directory and link against those.
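
Roughly what I do, as a sketch (the paths here are just placeholders; the
actual flag sets I use are further down in this thread):

  # HDF5 into a local prefix
  cd hdf5-1.8.11
  ./configure CC=mpicc --prefix=$HOME/local/hdf5 --enable-shared --enable-parallel
  make && make install

  # CGNS against that HDF5
  cd ../cgns-3.2.1
  cmake . -DCMAKE_INSTALL_PREFIX:PATH=$HOME/local/cgns \
          -DCGNS_ENABLE_HDF5:BOOL=ON \
          -DHDF5_INCLUDE_PATH:PATH=$HOME/local/hdf5/include \
          -DHDF5_LIBRARY:FILEPATH=$HOME/local/hdf5/lib/libhdf5.so
  make && make install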

I will take a look at the m4 file and see if I can reproduce the problem.
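
In the meantime, one thing that might be worth checking (just a guess on my
part) is which HDF5, if any, your libcgns.so records as a dependency, e.g.:

  ldd /homes/fathom/3rdparty/cgns/lib/libcgns.so | grep hdf5
  readelf -d /homes/fathom/3rdparty/cgns/lib/libcgns.so | grep NEEDED

That should show whether cgns was linked against the same HDF5 that the
configure link line is picking up.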

Regards,

Carlos


      # Check if cgns is usable by itself
  AC_CHECK_LIB( [cgns], [cg_open], [CGNS_LIBS="-lcgns"], [
      # Check if cgns is usable with HDF5
    unset ac_cv_lib_cgns
    unset ac_cv_lib_cgns_cg_open
      # If we haven't already looked for HDF5 libraries, try again now in case
      # they're in the CGNS lib directory.
    FATHOM_DETECT_HDF5_LIBS
    LDFLAGS="$LDFLAGS $HDF5_LDFLAGS"
    AC_CHECK_LIB( [cgns], [cg_open], [CGNS_LIBS="-lcgns -lhdf5_hl"], [
      # Try one more time with HDF5 and libcurl
      unset ac_cv_lib_cgns
      unset ac_cv_lib_cgns_cg_open
      AC_CHECK_LIB( [cgns], [cg_open], [CGNS_LIBS="-lcgns -lhdf5_hl -lcurl"],
        [HAVE_CGNS=no], [-lhdf5_hl $HDF5_LIBS -lcurl] )],
      [-lhdf5_hl $HDF5_LIBS] )],
    )





On Tue, Nov 19, 2013 at 3:42 PM, Iulian Grindeanu <iulian at mcs.anl.gov> wrote:

> Thanks Carlos,
> I have some issues configuring moab with cgns.
> I did not rebuild hdf5; we are using version 1.8.8. cgns seems to be built
> without problems (I used ccmake),
> but something is wrong when configuring moab with cgns.
> Do you have any suggestions (besides rebuilding hdf5, cgns, ...)?
>
> The symbols needed from hdf5 seem to be there, so I can't really explain
> the error. Do you use Ubuntu 12 or Ubuntu 10? Or something else?
>
>
> /homes/fathom/3rdparty/hdf5-1.8.8-par-mpich2.1.5-gcc/lib> nm libhdf5.so |
> grep H5T_NATIVE_SCHAR_g
> 00000000004cc170 D H5T_NATIVE_SCHAR_g
> /homes/fathom/3rdparty/hdf5-1.8.8-par-mpich2.1.5-gcc/lib> nm libhdf5.so
> | grep H5Tget_native_type
> 00000000002424f0 T H5Tget_native_type
>
> ...
> configure:32106: result: no
> configure:32115: checking for cg_open in -lcgns
> configure:32148: /homes/fathom/3rdparty/mpich2/mpich2-1.5/gcc/bin/mpicc -o
> conftest  -Wall -pipe -pedantic -Wno-long-long -Wextra -Wcast-align
> -Wpointer-arith -Wformat -Wformat-security -Wshadow -Wunused-parameter -g
> -I/homes/fathom/3rdparty/cgns/include  -DVALGRIND
> -DUNORDERED_MAP_NS=std::tr1 -DHAVE_UNORDERED_MAP=tr1/unordered_map
> -DHAVE_UNORDERED_SET=tr1/unordered_set -L/homes/fathom/3rdparty/cgns/lib
> -L/homes/fathom/3rdparty/zlib/zlib-1.2.4/gcc/lib
> -L/homes/fathom/3rdparty/szip/szip-2.1/gcc/lib
> -L/homes/fathom/3rdparty/hdf5-1.8.8-par-mpich2.1.5-gcc/lib
> -L/homes/fathom/3rdparty/zlib/zlib-1.2.4/gcc/lib
> -L/homes/fathom/3rdparty/szip/szip-2.1/gcc/lib
> -L/homes/fathom/3rdparty/hdf5-1.8.8-par-mpich2.1.5-gcc/lib
> -L/homes/fathom/3rdparty/zlib/zlib-1.2.4/gcc/lib
> -L/homes/fathom/3rdparty/szip/szip-2.1/gcc/lib
> -L/homes/fathom/3rdparty/hdf5-1.8.8-par-mpich2.1.5-gcc/lib
> -L/homes/fathom/3rdparty/zlib/zlib-1.2.4/gcc/lib
> -L/homes/fathom/3rdparty/szip/szip-2.1/gcc/lib
> -L/homes/fathom/3rdparty/hdf5-1.8.8-par-mpich2.1.5-gcc/lib
> -L/homes/fathom/3rdparty/zlib/zlib-1.2.4/gcc/lib
> -L/homes/fathom/3rdparty/szip/szip-2.1/gcc/lib
> -L/homes/fathom/3rdparty/hdf5-1.8.8-par-mpich2.1.5-gcc/lib conftest.c
> -lcgns -lhdf5_hl -lhdf5   -lcurl   -lm
> -L/homes/fathom/3rdparty/mpich2/mpich2-1.5/gcc/lib
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib -L/lib/x86_64-linux-gnu
> -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../.. -lmpich -lopa -lmpl -lrt
> -lpthread -lgfortran -lm -lquadmath
> -L/homes/fathom/3rdparty/mpich2/mpich2-1.5/gcc/lib
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib -L/lib/x86_64-linux-gnu
> -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib
> -L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../.. -lmpichf90 -lmpich -lopa -lmpl
> -lrt -lpthread -lgfortran -lm -lquadmath >&5
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to `
> H5T_NATIVE_SCHAR_g'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5Tget_native_type'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5Pset_link_creation_order'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5P_CLS_DATASET_CREATE_g'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5Sget_simple_extent_npoints'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5T_IEEE_F64LE_g'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5P_CLS_FILE_CREATE_g'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5Tcopy'
> /homes/fathom/3rdparty/cgns/lib/libcgns.so: undefined reference to
> `H5Fopen'
> ...
>
> ------------------------------
>
> Iulian, here are some data on the supporting libs:
>
> =========
> HDF5 v1.8.11
> =========
>
> I compile the parallel version, but do not use the parallel IO
> capabilities yet in CGNS.
>
> ./configure \
>   CC=%s \
>   CXX=%s \
>   FC=%s \
>   F9X=%s \
>   RUNPARALLEL=%s \
>   OMPI_MCA_disable_memory_allocator=1 \
>   --prefix=%s \
>   --enable-largefile \
>   --enable-unsupported \
>   --enable-shared \
>   --disable-static \
>   --enable-production=yes \
>   --with-pthread=%s,%s \
>   --with-zlib=%s,%s \
>   --with-default-api-version=v18 \
>   --enable-parallel=yes \
>   --enable-cxx \
>   --disable-sharedlib-rpath
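>
> (For reference, a purely hypothetical expansion of the %s placeholders
> above, with made-up compiler names and paths, would look like:
>
>   CC=mpicc CXX=mpicxx FC=mpif90 F9X=mpif90 RUNPARALLEL=mpiexec \
>   --prefix=$HOME/local/hdf5-1.8.11 \
>   --with-pthread=/usr/include,/usr/lib \
>   --with-zlib=/usr/include,/usr/lib
>
> i.e. the --with-* pairs are include,lib directory pairs.)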
>
> hdf5_config.log attached
>
> =========
> CGNS v3.2.1 (Beta)
> =========
>
> We have not considered the parallel IO capabilities as of now. There is no
> clear definition yet of how to handle MIXED-type element meshes, for which
> you cannot specify a fixed record length to retrieve with hdf5, although
> the HDF5 library I linked against is mpi-enabled.
> Junior at some point tested with version 3.1.4 release 2 and it worked
> fine, AFAIK.
>
> The library is transitioning to CMake; I've used:
>
> cmake \
>   -DCGNS_BUILD_SHARED:BOOL=ON \
>   -DCMAKE_BUILD_TYPE:STRING=Release \
>   -DCMAKE_INSTALL_PREFIX:PATH=%s \
>   -DCMAKE_SKIP_RPATH:BOOL=ON \
>   -DCGNS_BUILD_CGNSTOOLS:BOOL=OFF \
>   -DCGNS_ENABLE_64BIT:BOOL=ON \
>   -DCGNS_ENABLE_FORTRAN:BOOL=OFF \
>   -DCGNS_ENABLE_PARALLEL:BOOL=ON \
>   -DCGNS_ENABLE_HDF5:BOOL=ON \
>   -DHDF5_INCLUDE_PATH:PATH=%s \
>   -DHDF5_LIBRARY:FILEPATH=%s \
>   -DHDF5_NEED_MPI:BOOL=ON \
>   -DHDF5_NEED_ZLIB:BOOL=ON \
>   -DZLIB_LIBRARY:FILEPATH=%s \
>   -DCGNS_ENABLE_SCOPING:BOOL=OFF \
>   -DCGNS_ENABLE_TESTS:BOOL=OFF \
>   -DMPIEXEC:FILEPATH=%s \
>   -DMPI_C_COMPILER:FILEPATH=%s \
>   -DMPI_C_INCLUDE_PATH:PATH=\"%s\"
>
> cgns321_CMakeCache.txt is attached.
>
> =========
> MOAB v4.6.2
> =========
>
> The patch is against the tar file from the website (
> http://ftp.mcs.anl.gov/pub/fathom/moab-4.6.2.tar.gz). There is no need
> from our side to keep it on 462. Feel free to include it in a later
> release; it was only convenient for me to keep the patch against a fixed
> release. For that, the previous email packaged a tar.gz with the modified
> and new sources and sample cgns files.
> The reader and writer are serial only. We tested against homogeneous and
> heterogeneous meshes of up to 20 million elements and it worked. You can
> read a CGNS file, convert it to H5M and then convert it back to CGNS,
> e.g. with mbconvert as sketched below.
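>
> File names below are just placeholders, and this assumes the patched MOAB
> registers the .cgns extension with its reader and writer:
>
>   mbconvert wingbody.cgns wingbody.h5m
>   mbconvert wingbody.h5m wingbody_roundtrip.cgns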
>
> The only modification needed to compile moab with the reader is a
> --with-cgns=DIR configure flag, e.g.:
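>
> (Directory names in the sketch below are just placeholders for our local
> installs.)
>
>   ./configure --with-cgns=$HOME/local/cgns \
>               --with-hdf5=$HOME/local/hdf5 \
>               --enable-shared \
>               --prefix=$HOME/local/moab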
>
> These builds use shared libs for now, as I am working on my workstation.
> Later I will experiment with static objects for clusters.
>
> If you need more details, let us know.
>
> Regards,
> Carlos
>
>
> On Tue, Nov 19, 2013 at 1:30 PM, Iulian Grindeanu <iulian at mcs.anl.gov> wrote:
>
>> Thank you Carlos and Junior, for your help.
>> I will need to build the cgns library; you mentioned that you tested 3.2
>> beta too; I will probably stick with 3.1.4 (release 2?).
>> Also, what version of the hdf5 library do you use? Should it work for any
>> version greater than 1.8?
>> I assume you build statically? Can I also get your config.log files for
>> cgns and moab?
>>
>> I assume that you are not using any of the parallel io capability,
>> because that seems to be available only in the beta version.
>> So everything is read/written in serial, isn't it?
>> I am not sure if we will port the cgns reader/writer to 4.6.2 also. Is
>> your patch against the 4.6.2 tar file or against the "Version4.6" branch?
>> There should not be much difference.
>>
>> Tim, what do you suggest? In a way, it should be easier to update 4.6.2,
>> but I don't want to merge to master after that; there are too many
>> changes. This patch should work out of the box for 4.6.2, after I build
>> cgns.
>>
>> Best Regards,
>> Iulian
>>
>> ------------------------------
>>
>> Iulian,
>>
>> Sorry for the delay in getting back to you... I have tested the new
>> version and it worked OK for my mesh sample. I will test it later for
>> other cases.
>>
>> Also, I am sending you the CGNS reader and writer along with input cgns
>> mesh examples. We have worked on top of version 462, so things may look a
>> little out of place from the repository.
>>
>> The attached patch modifies some autotools files and adds new files (apply
>> with patch -p0 -i moab462_cgns.patch). In addition, the tar.gz contains
>> sample meshes (a simple airfoil mesh in 2D and a mixed-mesh discretization
>> of a 3D wingbody geometry).
>>
>> We have tested the CGNS capability with the latest library versions, 3.1.4
>> and 3.2-beta. Furthermore, we stuck with the base CGNS use-case, that is,
>> a single base/zone. We will also implement later the capability to write
>> out user variables.
>>
>> Please feel free to improve the files, and if you have any questions, let
>> us know: carbrevi at gmail.com and junior.hmg at gmail.com
>>
>> Regards,
>>
>> Carlos
>>
>>
>>
>> On Sat, Nov 16, 2013 at 3:57 AM, Iulian Grindeanu <iulian at mcs.anl.gov> wrote:
>>
>>> Hi Carlos,
>>> That ticket is now closed, Tim fixed the bug.
>>> Please let us know if it works for you.
>>>
>>> Thanks for your patience,
>>> Iulian
>>> ------------------------------
>>>
>>> Iulian,
>>>
>>> thanks for checking this out. I will be following the trac ticket. The
>>> mesh I used is attached, partitioned for 8 procs (all tri). I noticed
>>> that you created much simpler sample meshes to investigate this issue.
>>> Anyway, you can use the airfoil mesh if you find it useful.
>>>
>>> I use a separate code to compute element adjacencies and then pass those
>>> to metis. To write the partitions into h5m I followed the code from the
>>> mbzoltan tool. The partitioning code is correct (element-based), since I
>>> have used this code previously with other applications...
>>>
>>> I am from Brazil indeed, UTC-3. I'm a PhD candidate at the Technological
>>> Institute of Aeronautics in Sao Jose dos Campos.
>>>
>>> Now I am building/adapting parts of my research code (high-order
>>> unstructured CFD) to use MOAB. I will stick with 462 for this, working
>>> with non-mixed meshes for this transition period. In the near future I
>>> will also look into the structured mesh capability of the library, as
>>> well as the high-order elements implementation.
>>>
>>> The CGNS reader/writer is in the works too and I will submit those as
>>> well. My co-worker Junior, cc'ed here, is dealing with the writer part
>>> and we are almost done.
>>>
>>> Regards,
>>> Carlos Breviglieri
>>>
>>>
>>>
>>>
>>> On Sat, Oct 12, 2013 at 5:40 PM, Iulian Grindeanu <iulian at mcs.anl.gov> wrote:
>>>
>>>> Hi Carlos,
>>>> I am seeing a problem with ghosting; thank you for pointing it out.
>>>> Danqing and I messed with that code :( to fix some other issues we were
>>>> seeing.
>>>> http://trac.mcs.anl.gov/projects/ITAPS/ticket/284
>>>>
>>>> It may take a while to fix it properly.
>>>> Thanks again,
>>>> Iulian
>>>>
>>>> ------------------------------
>>>>
>>>> Hmmmm,
>>>> Can you send me the 2d_naca0012.h5m with your partition?
>>>> If you do ghosting after reading, did you get the same results?
>>>> The elements/processors should not change ownership after ghosting, but
>>>> maybe they do.
>>>> It is indeed a work in progress, you may have found another issue :(
>>>> Not being able to write is probably the biggest problem.
>>>>
>>>> What time zone do you work in? :) Also, where are you from? The name
>>>> looks Brazilian, Portuguese, or Italian? Or Spanish?
>>>>
>>>> Thanks,
>>>> Iulian
>>>>
>>>> ------------------------------
>>>>
>>>> Hi Iulian,
>>>>
>>>> I have just run some tests with the latest clone of the master repo
>>>> (Saturday morning). Here are my findings with MOAB 470pre:
>>>>
>>>> Now I am able to read partitioned mixed meshes without errors. However,
>>>> if I plot the owned entities for a given proc, even for non-mixed
>>>> meshes, they differ from the ones obtained by the partitioner; see below.
>>>>
>>>> Mesh distribution for 2d_naca0012.h5m (all tri mesh) from partitioner,
>>>> over 8 procs:
>>>> proc[0] has 866 elements
>>>> proc[1] has 863 elements
>>>> proc[2] has 866 elements
>>>> proc[3] has 869 elements
>>>> proc[4] has 872 elements
>>>> proc[5] has 869 elements
>>>> proc[6] has 862 elements
>>>> proc[7] has 877 elements
>>>>
>>>> Mesh distribution seen by MOAB 470pre
>>>> ("PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS;PARALLEL_GHOSTS=2.0.1")
>>>> owned_entities[5], size = 864
>>>> owned_entities[6], size = 853
>>>> owned_entities[7], size = 871
>>>> owned_entities[1], size = 859
>>>> owned_entities[3], size = 860
>>>> owned_entities[4], size = 867
>>>> owned_entities[0], size = 866
>>>> owned_entities[2], size = 861
>>>>
>>>> Besides proc 0, all others report the wrong number of entities. The code
>>>> to compute this distribution is below (based on example/HelloParMOAB.cpp).
>>>> The mixed mesh (2d_naca0012_mixed.h5m) is now read (it does not crash),
>>>> but the distribution is off as well. With MOAB 462 the distribution is OK
>>>> for homogeneous meshes.
>>>>
>>>> Moreover, with MOAB 470pre, no output is written to disk with
>>>> PARALLEL=WRITE_PART, regardless of the mesh type. Using PARALLEL=NONE
>>>> works, but only one part of the domain is written, as expected.
>>>>
>>>> I understand that this is a work in progress. If you need more
>>>> information, let me know.
>>>>
>>>> Regards,
>>>>
>>>> Carlos Breviglieri
>>>>
>>>>
>>>>     // Assumed from the surrounding driver (not shown in this snippet):
>>>>     //   #include "moab/Core.hpp"
>>>>     //   #include "moab/ParallelComm.hpp"
>>>>     //   using namespace moab;
>>>>     //   MPI_Comm myComm;            // communicator, e.g. MPI_COMM_WORLD
>>>>     //   int myRank;                 // rank of this process in myComm
>>>>     //   std::string meshFile, outputFile;
>>>>
>>>>     std::string read_options =
>>>> "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS;PARALLEL_GHOSTS=2.0.1";
>>>>
>>>>     moab::Interface* mb = new Core;
>>>>
>>>>     // Create root sets for each mesh.  Then pass these
>>>>     // to the load_file functions to be populated.
>>>>     EntityHandle rootset, partnset;
>>>>     mb->create_meshset(MESHSET_SET, rootset);
>>>>     mb->create_meshset(MESHSET_SET, partnset);
>>>>
>>>>     // Create the parallel communicator object with the partition
>>>>     // handle associated with MOAB
>>>>     ParallelComm *pcomm = ParallelComm::get_pcomm(mb, partnset, &myComm);
>>>>
>>>>     // Load the file from disk with the given options
>>>>     mb->load_file(meshFile.c_str(), &rootset, read_options.c_str());
>>>>
>>>>     // Get all entities of dimension = dim
>>>>     Range elemRange, owned_entities;
>>>>     int dim = 2;
>>>>     mb->get_entities_by_dimension(rootset, dim, elemRange, false);
>>>>
>>>>     pcomm->filter_pstatus(elemRange,         // entities we want to filter
>>>>                           PSTATUS_NOT_OWNED, // status we are looking for
>>>>                           PSTATUS_NOT,       // operation applied; returns owned entities (!not_owned = owned)
>>>>                           -1,                // -1 means all processors
>>>>                           &owned_entities);
>>>>
>>>>     std::vector<int> procID(owned_entities.size(), myRank);
>>>>
>>>>     std::cout << "owned_entities[" << myRank << "], size = "
>>>>               << owned_entities.size() << std::endl;
>>>>
>>>>     // Tag each owned element with the rank that owns it
>>>>     Tag procID_tag;
>>>>     mb->tag_get_handle("PROC_ID", 1, MB_TYPE_INTEGER, procID_tag,
>>>>                        MB_TAG_CREAT | MB_TAG_DENSE, &procID[0]);
>>>>     mb->tag_set_data(procID_tag, owned_entities, &procID[0]);
>>>>
>>>>     // WRITE_PART writes all partitions to a single output file (only the
>>>>     // h5m format supports parallel IO at the moment).  One can use the
>>>>     // mbconvert tool to convert the output to other formats.
>>>>     mb->write_file(outputFile.c_str(), "H5M", "PARALLEL=WRITE_PART");
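>>>>
>>>>     To build and run this snippet I do roughly the following (the source
>>>>     file name and the MOAB install path are placeholders):
>>>>
>>>>       mpicxx -o hello_parmoab hello_parmoab.cpp \
>>>>              -I$MOAB_DIR/include -L$MOAB_DIR/lib -lMOAB
>>>>       mpiexec -np 8 ./hello_parmoab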
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Oct 10, 2013 at 10:37 AM, Tim Tautges <tautges at mcs.anl.gov> wrote:
>>>>
>>>>> Yeah, too complicated to backport, and latest works for Carlos anyway.
>>>>>
>>>>> - tim
>>>>>
>>>>> On 10/09/2013 09:41 PM, Iulian Grindeanu wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> ------------------------------------------------------------
>>>>>> ------------------------------------------------------------
>>>>>>
>>>>>>
>>>>>>     Both worked for me, with the current code (4.7.0pre)
>>>>>>
>>>>>>     I don't get your error :(
>>>>>>     What version are you using? I will try 4.6.2, but it should be
>>>>>> fine there too :(
>>>>>>
>>>>>>
>>>>>>     OK, I got an error with more quads on 4.6.2; maybe I mixed them
>>>>>> up when I saved :(
>>>>>>       mpiexec -np 2 /home/iulian/source/MOAB46/tools/mbconvert -O
>>>>>> PARALLEL=READ_PART -O PARTITION=PARALLEL_PARTITION -O
>>>>>>     PARALLEL_RESOLVE_SHARED_ENTS -O  PARALLEL_GHOSTS=2.0.1  -o
>>>>>> PARALLEL=WRITE_PART
>>>>>>     /home/iulian/tmp/2d_naca0012_mixed2.h5m 2.h5m
>>>>>>     Leaked HDF5 object handle in function at
>>>>>> ../../../moab46source/src/io/ReadHDF5.cpp:1523
>>>>>>     Open at entrance: 1
>>>>>>     Open at exit:     2
>>>>>>     Leaked HDF5 object handle in function at
>>>>>> ../../../moab46source/src/io/ReadHDF5.cpp:827
>>>>>>     Open at entrance: 1
>>>>>>     Open at exit:     2
>>>>>>     Failed to load "/home/iulian/tmp/2d_naca0012_mixed2.h5m".
>>>>>>     Error code: MB_INDEX_OUT_OF_RANGE (1)
>>>>>>     Error message: Failed in step PARALLEL READ PART
>>>>>>     Cannot close file with open handles: 0 file, 1 data, 0 group, 0
>>>>>> type, 0 attr
>>>>>>
>>>>>>
>>>>>>     I will look into it.
>>>>>>
>>>>>> Hi Carlos,
>>>>>> It looks like it is a bug in 4.6.2.
>>>>>> I don't know if it will be fixed; there are some important changes to
>>>>>> ghosting in the current version.
>>>>>> So for the Version4.6 branch, the model with 17 quads works fine if you
>>>>>> don't do ghosting:
>>>>>>
>>>>>> iulian at T520-iuli:~/source/MOAB46$ mpiexec -np 2
>>>>>> /home/iulian/source/MOAB46/tools/mbconvert -O PARALLEL=READ_PART -O
>>>>>> PARTITION=PARALLEL_PARTITION -O PARALLEL_RESOLVE_SHARED_ENTS  -o
>>>>>> PARALLEL=WRITE_PART
>>>>>> /home/iulian/tmp/2d_naca0012_mixed_invert.h5m 2.h5m
>>>>>> Read "/home/iulian/tmp/2d_naca0012_mixed_invert.h5m"
>>>>>> Wrote "2.h5m"
>>>>>>
>>>>>> I would recommend upgrading to current version.
>>>>>> That code is pretty complicated, and I am not sure if we will
>>>>>> backport changes to the Version4.6 branch.
>>>>>>
>>>>>> Tim, what do you suggest? Should I try to backport some changes in
>>>>>> ParallelComm? I know you are working on that code.
>>>>>>
>>>>>> Thanks,
>>>>>> Iulian
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>> --
>>>>> ================================================================
>>>>> "You will keep in perfect peace him whose mind is
>>>>>   steadfast, because he trusts in you."               Isaiah 26:3
>>>>>
>>>>>              Tim Tautges            Argonne National Laboratory
>>>>>          (tautges at mcs.anl.gov)      (telecommuting from UW-Madison)
>>>>>  phone (gvoice): (608) 354-1459      1500 Engineering Dr.
>>>>>             fax: (608) 263-4499      Madison, WI 53706
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>

