[MOAB-dev] mixed mesh question

Carlos Breviglieri carbrevi at gmail.com
Tue Nov 19 07:10:00 CST 2013


Iulian,

Sorry for the delay in getting back to you... I have tested the new version
and it worked OK for my mesh sample. I will test it later with other cases.

Also, I am sending you the CGNS reader and writer along with sample CGNS
input meshes. We worked on top of version 462, so things may look a little
out of place relative to the repository.

The attached patch modifies some autotools files and adds the new files
(apply with patch -p0 -i moab462_cgns.patch). Moreover, the tar.gz contains
sample meshes (a simple airfoil mesh in 2D and a mixed-mesh discretization
of a 3D wing-body geometry).

We have tested the CGNS capability with the latest library versions, 3.1.4
and 3.2-beta. Furthermore, we stuck with the basic CGNS use case, that is,
a single base/zone. We will implement the capability to write out user
variables later.
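
Assuming the new reader and writer are registered by file extension the way
MOAB's other I/O classes are, a minimal sketch of how they would be invoked
(the file names here are just placeholders):

    #include "moab/Core.hpp"
    #include <iostream>

    int main()
    {
      moab::Core mb;
      // The .cgns extension selects the reader (single base/zone case).
      moab::ErrorCode rval = mb.load_file("airfoil_2d.cgns"); // placeholder name
      if (moab::MB_SUCCESS != rval) {
        std::cerr << "CGNS read failed" << std::endl;
        return 1;
      }
      // Writing back out through the same extension uses the CGNS writer.
      rval = mb.write_file("airfoil_2d_copy.cgns"); // placeholder name
      return (moab::MB_SUCCESS == rval) ? 0 : 1;
    }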

Please feel free to improve the files, and if you have any questions, let
us know: carbrevi at gmail.com and junior.hmg at gmail.com

Regards,

Carlos



On Sat, Nov 16, 2013 at 3:57 AM, Iulian Grindeanu <iulian at mcs.anl.gov> wrote:

> Hi Carlos,
> That ticket is now closed, Tim fixed the bug.
> Please let us know if it works for you.
>
> Thanks for your patience,
> Iulian
> ------------------------------
>
> Iulian,
>
> thanks for checking this out. I will be following the trac ticket. The
> mesh I used is attached, partitioned for 8 procs (all tri). I noticed that
> the sample meshes you created to investigate this issue are much simpler.
> Anyway, you can use the airfoil mesh if you find it useful.
>
> I use a separate code to compute element adjacencies and then pass them to
> METIS. To write the partitions into h5m I followed the code from the
> mbzoltan tool (a rough sketch of that step is below). The partitioning code
> is correct (element-based), since I have used it previously with other
> applications...
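>
> Roughly, it comes down to creating one entity set per METIS part, adding
> that part's elements to it, and tagging the set with PARALLEL_PARTITION so
> that PARTITION=PARALLEL_PARTITION can find it on read. A hedged sketch;
> num_parts and part_elems come from our METIS step and are not MOAB names:
>
>     moab::Tag part_tag;
>     mb->tag_get_handle("PARALLEL_PARTITION", 1, moab::MB_TYPE_INTEGER,
>                        part_tag, moab::MB_TAG_SPARSE | moab::MB_TAG_CREAT);
>     for (int p = 0; p < num_parts; ++p) {      // num_parts: from METIS
>       moab::EntityHandle part_set;
>       mb->create_meshset(moab::MESHSET_SET, part_set);
>       mb->add_entities(part_set, part_elems[p]); // Range of elements in part p
>       mb->tag_set_data(part_tag, &part_set, 1, &p); // part number on the set
>     }
>     mb->write_file("partitioned.h5m");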
>
> I am from Brazil indeed, UTC-3. I'm a PhD candidate at the Technological
> Institute of Aeronautics in Sao Jose dos Campos.
>
> Now I am building/adapting parts of my research code (high-order
> unstructured CFD) to use MOAB. I will stick with 462 for this, working with
> non-mixed meshes for this transition period. In the near future I will
> also look into the structured mesh capability of the library, as well as
> the high-order elements implementation.
>
> The CGNS reader/writer is in the works too and I will submit those as
> well. My co-worker Junior, cc'ed here, is handling the writer part and
> we are almost done.
>
> Regards,
> Carlos Breviglieri
>
>
>
>
> On Sat, Oct 12, 2013 at 5:40 PM, Iulian Grindeanu <iulian at mcs.anl.gov> wrote:
>
>> Hi Carlos,
>> I am seeing a problem with ghosting, thank you for pointing it out.
>> Danqing and I messed up that code :( while fixing some other issues we
>> were seeing.
>> http://trac.mcs.anl.gov/projects/ITAPS/ticket/284
>>
>> It may take a while to fix it properly.
>> Thanks again,
>> Iulian
>>
>> ------------------------------
>>
>> Hmmmm,
>> Can you send me the 2d_naca0012.h5m with your partition?
>> If you do ghosting after reading, did you get the same results?
>> The elements/processors should not change ownership after ghosting, but
>> maybe they do.
>> It is indeed a work in progress; you may have found another issue :(
>> Not being able to write is probably the biggest problem.
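>>
>> For reference, by "ghosting after reading" I mean reading without the
>> PARALLEL_GHOSTS option and then calling the usual ParallelComm ghost
>> exchange afterwards; a minimal sketch, assuming mb is the moab::Interface*
>> from your snippet:
>>
>>     moab::ParallelComm* pcomm = moab::ParallelComm::get_pcomm(mb, 0);
>>     // 2 = ghost dim, 0 = bridge dim (vertices), 1 = number of layers,
>>     // 0 = no additional entities, true = store remote handles
>>     pcomm->exchange_ghost_cells(2, 0, 1, 0, true);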
>>
>> What time zone do you work in? :) Also, where are you from? The name
>> looks Brazilian, Portuguese, or Italian? Or Spanish?
>>
>> Thanks,
>> Iulian
>>
>> ------------------------------
>>
>> Hi Iulian,
>>
>> I have just run some tests with the latest clone of the master repo
>> (Saturday morning). Here are my findings with MOAB 470pre:
>>
>> Now I am able to read partitioned mixed meshes without errors. However, if
>> I plot the owned entities for a given proc, even for non-mixed meshes, they
>> differ from the ones produced by the partitioner; see below.
>>
>> Mesh distribution for 2d_naca0012.h5m (all-tri mesh) from the partitioner,
>> over 8 procs:
>> proc[0] has 866 elements
>> proc[1] has 863 elements
>> proc[2] has 866 elements
>> proc[3] has 869 elements
>> proc[4] has 872 elements
>> proc[5] has 869 elements
>> proc[6] has 862 elements
>> proc[7] has 877 elements
>>
>> Mesh distribution seen by MOAB 470pre
>> ("PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS;PARALLEL_GHOSTS=2.0.1")
>> owned_entities[5], size = 864
>> owned_entities[6], size = 853
>> owned_entities[7], size = 871
>> owned_entities[1], size = 859
>> owned_entities[3], size = 860
>> owned_entities[4], size = 867
>> owned_entities[0], size = 866
>> owned_entities[2], size = 861
>>
>> Except for proc 0, all others report the wrong number of entities. The
>> code used to compute this distribution is below (based on
>> example/HelloParMOAB.cpp). The mixed mesh (2d_naca0012_mixed.h5m) is now
>> read (it does not crash), but its distribution is off as well. With MOAB
>> 462 the distribution is OK for homogeneous meshes.
>>
>> Moreover, with MOAB 470pre, no output is written to disk with
>> PARALLEL=WRITE_PART, regardless of the mesh type. Using PARALLEL=NONE
>> works, but only one part of the domain is written, as expected.
>>
>> I understand that this is a work in progress. If you need more
>> information, let me know.
>>
>> Regards,
>>
>> Carlos Breviglieri
>>
>>
>>     read_options =
>> "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS;PARALLEL_GHOSTS=2.0.1";
>>
>>     moab::Interface* mb = new Core;
>>
>>     // Create root sets for each mesh.  Then pass these
>>     // to the load_file functions to be populated.
>>     EntityHandle rootset, partnset;
>>     mb->create_meshset(MESHSET_SET, rootset);
>>     mb->create_meshset(MESHSET_SET, partnset);
>>
>>     // Create the parallel communicator object with the partition
>>     // handle associated with MOAB
>>     ParallelComm *pcomm = ParallelComm::get_pcomm(mb, partnset, &myComm);
>>
>>     // Load the file from disk with given options
>>     mb->load_file(meshFile.c_str(), &rootset, read_options.c_str());
>>
>>     // Get all entities of dimension = dim
>>     Range elemRange, owned_entities;
>>     int dim = 2;
>>     mb->get_entities_by_dimension(rootset, dim, elemRange, false);
>>
>>     pcomm->filter_pstatus(elemRange,          // entities we want to filter
>>                           PSTATUS_NOT_OWNED,  // status we are looking for
>>                           PSTATUS_NOT,        // operation applied; returns
>>                                               // owned entities (!not_owned = owned)
>>                           -1,                 // all processors
>>                           &owned_entities);
>>
>>     std::vector<int> procID(owned_entities.size(), myRank);
>>
>>     std::cout << "owned_entities[" << myRank << "], size = " <<
>> owned_entities.size() << std::endl;
>>
>>     Tag procID_tag;
>>     mb->tag_get_handle("PROC_ID", 1, MB_TYPE_INTEGER, procID_tag,
>>                        MB_TAG_CREAT | MB_TAG_DENSE, &procID[0]);
>>
>>     mb->tag_set_data(procID_tag, owned_entities, &procID[0]);
>>
>>     // WRITE_PART writes all partitions to a single output file (only the
>>     // h5m format supports parallel I/O at the moment). One can use the
>>     // mbconvert tool to convert the output to other formats.
>>     mb->write_file(outputFile.c_str(), "H5M", "PARALLEL=WRITE_PART");
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Oct 10, 2013 at 10:37 AM, Tim Tautges <tautges at mcs.anl.gov> wrote:
>>
>>> Yeah, too complicated to backport, and the latest works for Carlos anyway.
>>>
>>> - tim
>>>
>>> On 10/09/2013 09:41 PM, Iulian Grindeanu wrote:
>>>
>>>>
>>>>
>>>> ------------------------------
>>>>
>>>>
>>>>     Both worked for me, with the current code (4.7.0pre)
>>>>
>>>>     I don't get your error :(
>>>>     What version are you using? I will try 4.6.2, but it should be fine
>>>> there too :(
>>>>
>>>>
>>>>     OK, I got an error with more quads on 4.6.2; maybe I mixed them up
>>>> when I saved :(
>>>>       mpiexec -np 2 /home/iulian/source/MOAB46/tools/mbconvert -O
>>>> PARALLEL=READ_PART -O PARTITION=PARALLEL_PARTITION -O
>>>>     PARALLEL_RESOLVE_SHARED_ENTS -O  PARALLEL_GHOSTS=2.0.1  -o
>>>> PARALLEL=WRITE_PART
>>>>     /home/iulian/tmp/2d_naca0012_mixed2.h5m 2.h5m
>>>>     Leaked HDF5 object handle in function at
>>>> ../../../moab46source/src/io/ReadHDF5.cpp:1523
>>>>     Open at entrance: 1
>>>>     Open at exit:     2
>>>>     Leaked HDF5 object handle in function at
>>>> ../../../moab46source/src/io/ReadHDF5.cpp:827
>>>>     Open at entrance: 1
>>>>     Open at exit:     2
>>>>     Failed to load "/home/iulian/tmp/2d_naca0012_mixed2.h5m".
>>>>     Error code: MB_INDEX_OUT_OF_RANGE (1)
>>>>     Error message: Failed in step PARALLEL READ PART
>>>>     Cannot close file with open handles: 0 file, 1 data, 0 group, 0
>>>> type, 0 attr
>>>>
>>>>
>>>>     I will look into it.
>>>>
>>>> Hi Carlos,
>>>> It looks like it is a bug in 4.6.2.
>>>> I don't know if it will be fixed; there are some important changes to
>>>> ghosting in the current version.
>>>> So on the Version4.6 branch, the model with 17 quads works fine if you
>>>> don't do ghosting:
>>>>
>>>> iulian at T520-iuli:~/source/MOAB46$ mpiexec -np 2
>>>> /home/iulian/source/MOAB46/tools/mbconvert -O PARALLEL=READ_PART -O
>>>> PARTITION=PARALLEL_PARTITION -O PARALLEL_RESOLVE_SHARED_ENTS  -o
>>>> PARALLEL=WRITE_PART
>>>> /home/iulian/tmp/2d_naca0012_mixed_invert.h5m 2.h5m
>>>> Read "/home/iulian/tmp/2d_naca0012_mixed_invert.h5m"
>>>> Wrote "2.h5m"
>>>>
>>>> I would recommend upgrading to the current version.
>>>> That code is pretty complicated, and I am not sure if we will backport
>>>> changes to the Version4.6 branch.
>>>>
>>>> Tim, what do you suggest? Should I try to backport some changes in
>>>> ParallelComm? I know you are working on that code.
>>>>
>>>> Thanks,
>>>> Iulian
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>> --
>>> ================================================================
>>> "You will keep in perfect peace him whose mind is
>>>   steadfast, because he trusts in you."               Isaiah 26:3
>>>
>>>              Tim Tautges            Argonne National Laboratory
>>>          (tautges at mcs.anl.gov)      (telecommuting from UW-Madison)
>>>  phone (gvoice): (608) 354-1459      1500 Engineering Dr.
>>>             fax: (608) 263-4499      Madison, WI 53706
>>>
>>>
>>
>>
>>
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cgns_moab_462.tar.gz
Type: application/x-gzip
Size: 9210837 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/moab-dev/attachments/20131119/0659db11/attachment-0002.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: moab462_cgns.patch
Type: text/x-patch
Size: 72897 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/moab-dev/attachments/20131119/0659db11/attachment-0003.bin>

