[MOAB-dev] question about reduce_tags behavior
Lorenzo Alessio Botti
ihabiamx at yahoo.it
Fri May 4 10:27:37 CDT 2012
Hi Tim,
I've found the bug in my code...
To get the sum of a tag on shared entities I need to call
pcomm->reduce_tags(gid_tag_vec, gid_tag_vec, MPI_SUM, sharedFaces);
and not
pcomm->reduce_tags(gid_tag_vec, gid_sum_tag_vec, MPI_SUM, sharedFaces);
Here is the new version; now it works...
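In short, the working pattern looks roughly like this (a trimmed sketch, not the attached partcheck.cpp; error checking abbreviated, names as in the code quoted below):

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include "MBParallelConventions.h"
#include <vector>

using namespace moab;

// Sum a face id tag over shared faces, writing the result back into the
// same tag on every sharer (i.e. src == dst, which was my fix above).
ErrorCode sum_gid_on_shared_faces(Interface &instance, ParallelComm *pcomm,
                                  Tag gid_tag, int dim)
{
  // faces = (dim-1)-dimensional entities that are shared with other procs
  Range sharedFaces;
  ErrorCode result = instance.get_entities_by_dimension(0, dim - 1, sharedFaces);
  if (MB_SUCCESS != result) return result;
  result = pcomm->filter_pstatus(sharedFaces, PSTATUS_SHARED, PSTATUS_AND);
  if (MB_SUCCESS != result) return result;

  // MPI_SUM into the same tag: afterwards each sharer holds the sum of the
  // sharers' values and can subtract its own value to recover the other's id.
  std::vector<Tag> gid_tag_vec(1, gid_tag);
  return pcomm->reduce_tags(gid_tag_vec, gid_tag_vec, MPI_SUM, sharedFaces);
}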
-------------- next part --------------
A non-text attachment was scrubbed...
Name: partcheck.cpp
Type: application/octet-stream
Size: 5295 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/moab-dev/attachments/20120504/adf40321/attachment-0002.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: square_10x10_100p_quad4.h5m
Type: application/octet-stream
Size: 692564 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/moab-dev/attachments/20120504/adf40321/attachment-0003.obj>
-------------- next part --------------
Thanks for the help,
Lorenzo
On Mar 30, 2012, at 4:20 PM, Tim Tautges wrote:
> [cc'ing to moab-dev in case others are interested...]
>
> I've inserted a few comments below in your code, but in general I think you've got the right idea; if reduce_tags doesn't work, it's probably a bug. If you could isolate the problem in a simple example (with mesh input), I'd be happy to look at it. There is a unit test now for this capability, so it's mostly tested. If I find a bug from your example, I'll fix it and add a test for it.
>
> On 03/30/2012 08:28 AM, Lorenzo Alessio Botti wrote:
>> Hi Tim,
>> first of all, sorry to bother you with something that is not fully tested yet.
>> I have just installed moab-4.5...
>>
>> After creating a non-unique global id for shared faces, which I have stored as an integer in gid_tag,
>> I'd like to know whether the following
>>
>> int zero = 0;
>> Tag gid_sum_tag;
>> result = _instance.tag_get_handle("__global_id_sum", 1, MB_TYPE_INTEGER,
>>                                   gid_sum_tag, MB_TAG_DENSE|MB_TAG_CREAT, &zero);
>> Range sharedFaces;
>> result = _instance.get_entities_by_dimension(0,dim-1,sharedFaces);
>> result = pcomm->filter_pstatus(sharedFaces, PSTATUS_SHARED, PSTATUS_AND);
>
> Probably a better way to get shared faces is to use:
>
> ErrorCode get_shared_entities(int other_proc,
>                               Range &shared_ents,
>                               int dim = -1,
>                               const bool iface = false,
>                               const bool owned_filter = false);
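>
> For example (just a sketch; other_proc = -1 is assumed here to mean "shared with any other processor"):
>
>   Range sharedFaces;
>   result = pcomm->get_shared_entities(-1, sharedFaces, dim-1);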
>
>> std::vector<Tag> gid_tag_vec;
>> gid_tag_vec.push_back(gid_tag);
>> std::vector<Tag> gid_sum_tag_vec;
>> gid_sum_tag_vec.push_back(gid_sum_tag);
>> pcomm->reduce_tags(gid_tag_vec, gid_sum_tag_vec, MPI_SUM, sharedFaces);
>
> I'll make another variant with two tag handles, so you don't need to create two temporary std::vectors.
>
> - tim
>
>>
>> is expected to put the sum of the gid_tag values from all sharers into gid_sum_tag.
>> My goal is then to subtract gid_tag from gid_sum_tag, so that the owner processor gets the global id that
>> I assigned to the shared face on the non-owner processor, and vice versa.
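>> (To spell out the arithmetic, assuming exactly two sharers: gid_sum = gid_own + gid_other, so each side recovers the other's id as gid_sum - gid_own.)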
>> I actually get an MB_FAILURE out of reduce_tags, but I'm wondering whether the behavior
>> I expect is the correct one.
>>
>> If so, I just have to wait... I don't want to put pressure on you.
>> The problem is that with exchange_tags I can get the "ghost" face ids only on the processor that
>> doesn't own the face, and this is forcing me to write a different kind of assembly code.
>>
>> The good news is that the agglomerated code works in serial and in parallel; see the attached files.
>> In 2d the element edges are sketched as white lines; in 3d the visualization of element faces is a bit tricky...
>> The dG p2 solution of the Laplace equation is as good as on a standard grid.
>>
>> Thanks for the help.
>> Lorenzo
>
> --
> ================================================================
> "You will keep in perfect peace him whose mind is
> steadfast, because he trusts in you." Isaiah 26:3
>
> Tim Tautges Argonne National Laboratory
> (tautges at mcs.anl.gov) (telecommuting from UW-Madison)
> phone (gvoice): (608) 354-1459 1500 Engineering Dr.
> fax: (608) 263-4499 Madison, WI 53706
>