[MOAB-dev] gather entities at root process

Lukasz Kaczmarczyk Lukasz.Kaczmarczyk at glasgow.ac.uk
Fri Nov 22 14:20:23 CST 2024


Hi

Thanks for this.

Kind regards,
Lukasz

From: Vijay S. Mahadevan <vijay.m at gmail.com>
Date: Friday, 22 November 2024 at 18:20
To: Lukasz Kaczmarczyk <Lukasz.Kaczmarczyk at glasgow.ac.uk>
Cc: moab-dev at mcs.anl.gov <moab-dev at mcs.anl.gov>
Subject: Re: gather entities at root process
Dear Lukasz,

Sorry about the delay in responding. I've been swamped with various
deadlines, but I have something that you can use.

Please look at the branch vijaysm/example-gather-mesh: under
examples/advanced/GatherMeshOnRoot.cpp, it demonstrates how one can
use ParallelComm to accumulate entities from different processes on
the root. It is what I would call the equivalent of an MPI_Gather
operation: Mesh_Gather. You can add options to move the tags as well;
this should be trivial within this infrastructure. Let me know what
you think. It is not extensively documented, but hopefully the usage
and ideas are straightforward. The one thing I dislike in this
workflow is that you need to perform a local "merge" of entities on
the root, especially of the shared edges/vertices; I don't see how
one can avoid this at the moment.
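
In outline, the workflow can be sketched as below. To be clear, this
is a condensed, hypothetical rendering and not the literal code in the
branch: gather_on_root is a made-up helper, the per-sender receive
loop and the MergeMesh tolerance are assumptions, and the exact
signatures should be checked against ParallelComm.hpp and
MergeMesh.hpp.

    #include <vector>
    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"
    #include "moab/MergeMesh.hpp"
    #include "moab/TupleList.hpp"

    using namespace moab;

    // Sketch: gather "ents" from every rank onto rank 0. Non-root ranks
    // send their subset; the root drains one message stream per sender,
    // then merges the entities duplicated along the former part
    // boundaries (the local "merge" step mentioned above).
    ErrorCode gather_on_root(ParallelComm *pcomm, Range &ents, Range &gathered)
    {
      const int rank = pcomm->rank(), size = pcomm->size();
      const bool adjs = false, tags = false;
      const bool store_remote_handles = false, is_iface = false;
      int incoming1 = 0, incoming2 = 0;
      std::vector<MPI_Request> recv_remoteh_reqs;
      ErrorCode rval;

      if (0 != rank) {
        // Sender side: ship the local entity subset to the root.
        TupleList entprocs;
        Range final_ents;
        rval = pcomm->send_entities(0 /*to_proc*/, ents, adjs, tags,
                                    store_remote_handles, is_iface,
                                    final_ents, incoming1, incoming2,
                                    entprocs, recv_remoteh_reqs,
                                    true /*wait_all*/);
        if (MB_SUCCESS != rval) return rval;
      }
      else {
        gathered = ents; // start from the root's own local piece
        std::vector<std::vector<EntityHandle>> L1hloc, L1hrem;
        std::vector<std::vector<int>> L1p;
        std::vector<EntityHandle> L2hloc, L2hrem;
        std::vector<unsigned int> L2p;
        for (int from = 1; from < size; ++from) {
          Range recd;
          rval = pcomm->recv_messages(from, store_remote_handles, is_iface,
                                      recd, incoming1, incoming2, L1hloc,
                                      L1hrem, L1p, L2hloc, L2hrem, L2p,
                                      recv_remoteh_reqs);
          if (MB_SUCCESS != rval) return rval;
          gathered.merge(recd);
        }
        // The received pieces duplicate vertices/edges along the old
        // part boundaries; stitch them locally. The tolerance here is
        // a placeholder to adjust for the mesh at hand.
        MergeMesh mm(pcomm->get_moab());
        rval = mm.merge_entities(gathered, 1.0e-10);
        if (MB_SUCCESS != rval) return rval;
      }
      return MB_SUCCESS;
    }

On the root, gathered then holds its local piece plus everything that
was received, with the duplicated interface entities merged away; the
other ranks only send. The code in the branch is the version to follow
for the details.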

I will submit a PR for that branch and we will get that merged as well.

Best,
Vijay

On Fri, Nov 15, 2024 at 6:35 AM Lukasz Kaczmarczyk
<Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
>
> Hi Vijay,
>
> Thank you for your response.
>
> I needed to accumulate a subset of the mesh on the root task, on top of its local portion of the mesh. I have the same internal surface distributed across a subset of processors and need to accumulate the root volumes adjacent to it. This is required only temporarily for some topological queries.
>
> I created a minimal working example that is not functioning as expected. I must be making a silly mistake somewhere, but I cannot figure out why it is not working. See the attached file and mesh. The mesh is partitioned for two processes.
>
> mpirun -np 2 ./mwe_send_entities
>
> Regards,
> Lukasz
>
> From: Vijay S. Mahadevan <vijay.m at gmail.com>
> Date: Thursday, 14 November 2024 at 15:38
> To: Lukasz Kaczmarczyk <Lukasz.Kaczmarczyk at glasgow.ac.uk>
> Cc: moab-dev <moab-dev-bounces at mcs.anl.gov>
> Subject: Re: gather entities at root process
>
> Hi Lukasz,
>
> Let me understand the context correctly here. Do you want to
> accumulate some subset of the mesh on the root task, on top of its
> local piece of the mesh? Or is the motivation to collect the entities
> on the root to perform some other task on them that only the root
> process is capable of achieving?
>
> > An easier approach, though a bit wasteful, is probably to keep the entire mesh on processor zero while the rest remains distributed and gather data from the parts onto it. That I can make work.
>
> I would not recommend this, especially since the approach will not
> scale well with mesh resolution.
>
> Your usage of send_entities/recv_messages looks correct according to
> the API, but why are you not using recv_entities to be consistent
> here? Having an MWE will help debug and provide a better solution for
> this particular workflow on meshes of interest.
>
> In our climate simulations, we handle migration of entire meshes from
> one set of processes to another set with adaptive repartitioning.
> These involve communication of entities between processes and I'll see
> if we can create an example out of that to just send parts of a mesh
> to the root.
>
> Best,
> Vijay
>
> On Wed, Nov 13, 2024 at 7:31 PM Lukasz Kaczmarczyk
> <Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
> >
> > Hi Vijay,
> >
> > I am looking for advice on how to gather some entities on processor zero. Is there a good way to do this? I couldn't find a working example for send_entities and recv_messages from pcomm.
> >
> > An easier approach, though a bit wasteful, is probably to keep the entire mesh on processor zero while the rest remains distributed and gather data from the parts onto it. That I can make work.
> >
> > However, I am not sure what the best policy is.
> >
> > Where is my mistake?
> >
> >     int to_proc = 0;
> >     Range orig_ents = ents;
> >     bool adjs = false;
> >     bool tags = false;
> >     bool store_remote_handles = false;
> >     bool is_iface = false;
> >     Range final_ents;
> >     int incoming1 = 0;
> >     int incoming2 = 0;
> >     TupleList entprocs;
> >     std::vector<MPI_Request> recv_remoteh_reqs;
> >     bool wait_all = true;
> >
> >     int from_proc = 1;
> >     std::vector<std::vector<EntityHandle>> L1hloc;
> >     std::vector<std::vector<EntityHandle>> L1hrem;
> >     std::vector<std::vector<int>> L1p;
> >     std::vector<EntityHandle> L2hloc;
> >     std::vector<EntityHandle> L2hrem;
> >     std::vector<unsigned int> L2p;
> >
> >     pcomm->recv_messages(
> >         from_proc, store_remote_handles, is_iface, final_ents, incoming1,
> >         incoming2, L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, recv_remoteh_reqs);
> >
> >     pcomm->send_entities(to_proc, orig_ents, adjs, tags,
> >                          store_remote_handles, is_iface, final_ents,
> >                          incoming1, incoming2, entprocs,
> >                          recv_remoteh_reqs, wait_all);
> >
> > Kind Regards,
> > Lukasz
