[MOAB-dev] problem with exchange_ghost_cells

Lorenzo Alessio Botti bottilorenzo at gmail.com
Mon Nov 18 11:23:08 CST 2013


Thanks for the quick fix, it really helps! 
I’m going to use it, since the (small) extra cost is worth the increased robustness.

Best,
Lorenzo

On 18 Nov 2013, at 16:52, Iulian Grindeanu <iulian at mcs.anl.gov> wrote:

> Thank you, Lorenzo, for reporting it;
> I have tried with the latest repo, and indeed, it hangs with mesh.h5m.
> 
> I have tried with bridge dimensions 0 and 1, and in both cases it gets out of the ghost exchange;
> so the problem is specific to bridge dimension 2, which you are using.
> "
>  if (pcomm->rank() == 0)
>     std::cout<<pcomm->rank()<<" Exchange ghost cells"<<std::endl;
>   result = pcomm->exchange_ghost_cells(dim,1,1,0,true,true);
>     assert(MB_SUCCESS==result);
> "
> In the meantime, can you use bridge dimension 0 or 1? (A sketch of such a call is included after the quoted message below.)
> 
> Thanks,
> Iulian
> 
> Dear all,
> I’m writing to inform you all of a problem with MOAB in parallel that bothers me a lot.
> Sometimes after reading a parallel .h5m mesh file and resolving shared entities the execution hangs while exchanging ghost cells.
> The issue arises randomly, but I noticed a clear tendency for it to occur when the number of elements in each partition decreases, that is, for the same mesh, when the number of partitions increases.
> Attached is a simple code that should allow you to reproduce the issue (I ran it with both 4.6.0 and trunk and got the same behavior); a sketch of the relevant calls is included after the quoted message below.
> mesh.h5m and mesh_ok.h5m are two different 16-part partitions of the same mesh (a 3D tetrahedral mesh).
> While with mesh_ok.h5m the code works perfectly, with mesh.h5m it hangs: the process with rank 1 does not return from exchange_ghost_cells when the code is run in parallel on 16 processors (mpiexec -n 16 <reader_exec_name>).
> The weird thing is that the code works when executed on fewer than 16 processors.
> 
> Any help is appreciated. Thanks in advance.
> Lorenzo
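
A minimal sketch of the kind of parallel reader described above (read a pre-partitioned .h5m, resolve shared entities, exchange one layer of ghost cells). This is not Lorenzo's attached code: the file name, the read option string, and the ghost/bridge dimensions (3 and 2 for a 3D tetrahedral mesh) are assumptions, and error handling is reduced to asserts.

#include <cassert>
#include <iostream>
#include <mpi.h>
#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  {
    moab::Core mb;

    // Read the pre-partitioned file in parallel and resolve shared entities.
    const char* opts = "PARALLEL=READ_PART;"
                       "PARTITION=PARALLEL_PARTITION;"
                       "PARALLEL_RESOLVE_SHARED_ENTS";
    moab::ErrorCode rval = mb.load_file("mesh.h5m", 0, opts);
    assert(moab::MB_SUCCESS == rval);

    // The ParallelComm instance created by the parallel reader.
    moab::ParallelComm* pcomm = moab::ParallelComm::get_pcomm(&mb, 0);
    assert(pcomm != 0);

    // One layer of 3D ghost elements, bridge dimension 2 (faces):
    // the configuration reported to hang on 16 processes.
    rval = pcomm->exchange_ghost_cells(3 /*ghost_dim*/, 2 /*bridge_dim*/,
                                       1 /*num_layers*/, 0 /*addl_ents*/,
                                       true /*store_remote_handles*/,
                                       true /*wait_all*/);
    assert(moab::MB_SUCCESS == rval);

    if (0 == pcomm->rank())
      std::cout << "ghost exchange done" << std::endl;
  }
  MPI_Finalize();
  return 0;
}

Built with the MPI compiler wrappers against a parallel MOAB build, and run as, e.g., mpiexec -n 16 ./reader to match the 16-part partitions mentioned above.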
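
And a sketch of the temporary workaround suggested above, i.e. the same exchange but bridging through vertices (bridge dimension 0) or edges (1) instead of faces (2); pcomm and rval are the variables from the sketch above.

    // Same exchange, but bridging through vertices (use 1 to bridge
    // through edges instead), as a temporary workaround.
    rval = pcomm->exchange_ghost_cells(3 /*ghost_dim*/, 0 /*bridge_dim*/,
                                       1 /*num_layers*/, 0 /*addl_ents*/,
                                       true /*store_remote_handles*/,
                                       true /*wait_all*/);
    assert(moab::MB_SUCCESS == rval);

Bridging through vertices ghosts any element that shares a vertex with a locally owned element, so it may produce somewhat more ghost elements than bridging through faces.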
