[MOAB-dev] commit/MOAB: iulian07: example for multiple communicators

Grindeanu, Iulian R. iulian at mcs.anl.gov
Tue Jun 10 12:40:09 CDT 2014


So I think the solution is to have one ParallelComm (PC) per mesh, plus a global one:

one PC for the target, to resolve its sharing (on its own MPI_Comm)

one PC for the source, to resolve its sharing (on its own MPI_Comm)

a global PC on the union of those two comms; this will handle the global communication, i.e. communication between the target and source meshes

Right now, in the example, we use only one PC per mesh.

I don't think changes are required in the PC class; only in the coupler example.
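
Something along these lines (a rough, untested sketch; the comm names and the half/half split are just for illustration):

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include <mpi.h>

using namespace moab;

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);

  int global_rank, global_size;
  MPI_Comm_rank(MPI_COMM_WORLD, &global_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &global_size);

  // disjoint split: lower half of the ranks hold the source mesh,
  // upper half the target mesh
  int color = (global_rank < global_size / 2) ? 0 : 1;
  MPI_Comm mesh_comm;
  MPI_Comm_split(MPI_COMM_WORLD, color, global_rank, &mesh_comm);

  Interface* mb = new Core;

  // PC on the sub-communicator: resolves sharing for this rank's mesh only
  ParallelComm* mesh_pc = new ParallelComm(mb, mesh_comm);

  // global PC on the union of the two sub-comms; with the split above
  // the union is all of MPI_COMM_WORLD, so a dup of it will do
  MPI_Comm union_comm;
  MPI_Comm_dup(MPI_COMM_WORLD, &union_comm);
  ParallelComm* global_pc = new ParallelComm(mb, union_comm);

  // ... load the source mesh on color 0 and the target mesh on color 1,
  // resolve shared entities through mesh_pc, couple through global_pc ...

  delete global_pc;
  delete mesh_pc;
  delete mb;
  MPI_Comm_free(&union_comm);
  MPI_Comm_free(&mesh_comm);
  MPI_Finalize();
  return 0;
}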


Iulian

________________________________________
From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf of Grindeanu, Iulian R. [iulian at mcs.anl.gov]
Sent: Tuesday, June 10, 2014 12:14 PM
To: Tim Tautges; moab-dev at mcs.anl.gov
Subject: Re: [MOAB-dev] commit/MOAB: iulian07: example for multiple communicators

I think you are right

________________________________________
From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf of Tim Tautges [timothy.tautges at cd-adapco.com]
Sent: Tuesday, June 10, 2014 12:09 PM
To: moab-dev at mcs.anl.gov
Subject: Re: [MOAB-dev] commit/MOAB: iulian07: example for multiple communicators

On 06/10/2014 12:05 PM, Grindeanu, Iulian R. wrote:
>
> In the coupler / test example, we have as many ParallelComm instances as files (usually 2: one for target, one for source), and one moab instance,
> but in the example both use MPI_COMM_WORLD.
>
> I think we can split it in the coupler example;
>
> some work might be required to show how that can be done.
>
> maybe it also makes sense to have 2 different moab instances (one for target, one for source), at least in the example.
>

Well, that's definitely not how the Coupler stuff was designed to happen.  In all that's been done with CouPE so far,
the source and target meshes were both distributed on the world communicator.  For disjoint, or at least different,
partitions for source/target, the coupling (CouPE and Coupler) will need to operate on the union of both, while each
mesh works on its own communicator for sharing.
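
One way to get that union when the two partitions don't cover all of MPI_COMM_WORLD (a rough, untested sketch; in_source/in_target are placeholders for however ranks get assigned to the two meshes):

// ranks hosting either mesh join the union communicator; everyone
// else passes MPI_UNDEFINED and gets MPI_COMM_NULL back
int color = (in_source || in_target) ? 0 : MPI_UNDEFINED;
MPI_Comm union_comm;
MPI_Comm_split(MPI_COMM_WORLD, color, global_rank, &union_comm);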

- tim

> Iulian
>
> ________________________________________
> From: moab-dev-bounces at mcs.anl.gov [moab-dev-bounces at mcs.anl.gov] on behalf of Robert Jacob [jacob at mcs.anl.gov]
> Sent: Tuesday, June 10, 2014 11:47 AM
> To: moab-dev at mcs.anl.gov
> Subject: Re: [MOAB-dev] commit/MOAB: iulian07: example for multiple communicators
>
> On 6/10/14 11:41 AM, commits-noreply at bitbucket.org wrote:
>>
>> The original MPI_COMM_WORLD is split into nbComms communicators (a second,
>> optional argument to ./HelloParMOAB; the first is the file to load).
>> Each communicator will load the full mesh in parallel, distributed
>> among its tasks.
>>
>> The communicators are completely independent from MOAB's point of view, and they
>> will never "inter-communicate" in MOAB code,
>>
>> so MOAB will handle only "intra-communication".
>
> In MCT, we advocate that each model have its own communicator, but there
> is still a copy of MPI_COMM_WORLD that can be used for
> inter-communication.  What is the solution for a coupled model with
> MOAB/CouPE?
>
> Rob
>
>>
>> Affected #:  1 file
>>
>> diff --git a/examples/HelloParMOAB.cpp b/examples/HelloParMOAB.cpp
>> index 5bcc4ab..3611a8d 100644
>> --- a/examples/HelloParMOAB.cpp
>> +++ b/examples/HelloParMOAB.cpp
>> @@ -2,6 +2,12 @@
>>     * \brief Read mesh into MOAB and resolve/exchange/report shared and ghosted entities \n
>>     * <b>To run</b>: mpiexec -np 4 HelloParMOAB [filename]\n
>>     *
>> + *  It shows how to load the mesh independently on multiple
>> + *  communicators (the second argument gives the number of comms)
>> + *
>> + *
>> + *
>> + *  mpiexec -np 8 HelloParMOAB [filename] [nbComms]
>>     */
>>
>>    #include "moab/ParallelComm.hpp"
>> @@ -26,6 +32,10 @@ int main(int argc, char **argv)
>>        test_file_name = argv[1];
>>      }
>>
>> +  int nbComms = 1;
>> +  if (argc > 2)
>> +    nbComms = atoi(argv[2]);
>> +
>>      options = "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS";
>>
>>      // Get MOAB instance and read the file with the specified options
>> @@ -33,15 +43,38 @@ int main(int argc, char **argv)
>>      if (NULL == mb)
>>        return 1;
>>
>> +  MPI_Comm comm;
>> +  int global_rank, global_size;
>> +  MPI_Comm_rank( MPI_COMM_WORLD, &global_rank );
>> +  MPI_Comm_size( MPI_COMM_WORLD, &global_size );
>> +
>> +  int color = global_rank%nbComms; // a different color for each group
>> +  if (nbComms>1)
>> +  {
>> +    // split the communicator, into ngroups = nbComms
>> +    MPI_Comm_split( MPI_COMM_WORLD, color, global_rank, &comm );
>> +  }
>> +  else
>> +  {
>> +    comm = MPI_COMM_WORLD;
>> +  }
>>      // Get the ParallelComm instance
>> -  ParallelComm* pcomm = new ParallelComm(mb, MPI_COMM_WORLD);
>> +  ParallelComm* pcomm = new ParallelComm(mb, comm);
>>      int nprocs = pcomm->proc_config().proc_size();
>>      int rank = pcomm->proc_config().proc_rank();
>> -  MPI_Comm comm = pcomm->proc_config().proc_comm();
>> +  MPI_Comm rcomm = pcomm->proc_config().proc_comm();
>> +  assert(rcomm==comm);
>> +  if (global_rank == 0)
>> +    cout<< " global rank:" <<global_rank << " color:" << color << " rank:" << rank << " of " << nprocs << " processors\n";
>> +
>> +  if (global_rank == 1)
>> +    cout<< " global rank:" <<global_rank << " color:" << color << " rank:" << rank << " of " << nprocs << " processors\n";
>>
>> -  if (rank == 0)
>> +  MPI_Barrier(MPI_COMM_WORLD);
>> +
>> +  if (global_rank == 0)
>>        cout << "Reading file " << test_file_name << "\n  with options: " << options << endl
>> -         << " on " << nprocs << " processors\n";
>> +         << " on " << nprocs << " processors on " << nbComms << " communicator(s) \n";
>>
>>      ErrorCode rval = mb->load_file(test_file_name.c_str(), 0, options.c_str());
>>      if (rval != MB_SUCCESS) {
>> @@ -69,9 +102,9 @@ int main(int argc, char **argv)
>>      for (int i = 0; i < 4; i++)
>>        nums[i] = (int)owned_entities.num_of_dimension(i);
>>      vector<int> rbuf(nprocs*4, 0);
>> -  MPI_Gather(nums, 4, MPI_INT, &rbuf[0], 4, MPI_INT, 0, MPI_COMM_WORLD);
>> +  MPI_Gather(nums, 4, MPI_INT, &rbuf[0], 4, MPI_INT, 0, comm);
>>      // Print the stats gathered:
>> -  if (rank == 0) {
>> +  if (global_rank == 0) {
>>        for (int i = 0; i < nprocs; i++)
>>          cout << " Shared, owned entities on proc " << i << ": " << rbuf[4*i] << " verts, " <<
>>              rbuf[4*i + 1] << " edges, " << rbuf[4*i + 2] << " faces, " << rbuf[4*i + 3] << " elements" << endl;
>> @@ -109,7 +142,7 @@ int main(int argc, char **argv)
>>
>>      // gather the statistics on processor 0
>>      MPI_Gather(nums, 4, MPI_INT, &rbuf[0], 4, MPI_INT, 0, comm);
>> -  if (rank == 0) {
>> +  if (global_rank == 0) {
>>        cout << " \n\n After exchanging one ghost layer: \n";
>>        for (int i = 0; i < nprocs; i++) {
>>          cout << " Shared, owned entities on proc " << i << ": " << rbuf[4*i] << " verts, " <<
>>
>> Repository URL: https://bitbucket.org/fathomteam/moab/
>>
>> --
>>
>> This is a commit notification from bitbucket.org. You are receiving
>> this because you have the service enabled, addressing the recipient of
>> this email.
>>

--
Timothy J. Tautges
Manager, Directed Meshing, CD-adapco
Phone: 608-354-1459
timothy.tautges at cd-adapco.com

