[MOAB-dev] commit/MOAB: iulian07: add a crystal router example

Vijay S. Mahadevan vijay.m at gmail.com
Mon Feb 17 11:34:41 CST 2014


Good job, Iulian. This was a much-needed example.

If you were also able to test with different Intel compiler versions
and replicate the bug we were facing in this context, that would be a
useful unit test case.

Vijay

On Mon, Feb 17, 2014 at 11:22 AM,  <commits-noreply at bitbucket.org> wrote:
> 1 new commit in MOAB:
>
> https://bitbucket.org/fathomteam/moab/commits/b2e3b2514782/
> Changeset:   b2e3b2514782
> Branch:      master
> User:        iulian07
> Date:        2014-02-17 18:15:48
> Summary:     add a crystal router example
>
> The crystal router is one of the building blocks of the parallel
> infrastructure in MOAB, so in my opinion it deserves
> an example.
> This example shows how to gather-scatter homogeneous
> data between processors with one call. It is assumed that the
> communication matrix is relatively sparse, which is the case
> for most applications.
>
> Affected #:  2 files
>
> diff --git a/examples/CrystalRouterExample.cpp b/examples/CrystalRouterExample.cpp
> new file mode 100644
> index 0000000..200f012
> --- /dev/null
> +++ b/examples/CrystalRouterExample.cpp
> @@ -0,0 +1,130 @@
> +/*
> + * This example shows one of the building blocks of the parallel infrastructure in MOAB.
> + * More exactly: if we have some homogeneous data to communicate from each processor to a list
> + * of other processors, how do we do it?
> + *
> + * It introduces the TupleList and the crystal router to MOAB users.
> + *
> + * This technology is used in resolving shared vertices / sets between partitions.
> + * It is used in the mbcoupler for sending data (target points) to the proper processor and for
> + *   communicating back the results.
> + * It is also used to communicate the departure mesh for intersection in parallel.
> + *
> + *  It is a way of doing a generalized MPI_Alltoallv() when the communication matrix is sparse.
> + *
> + *  It is assumed that every proc needs to communicate only with a few of the other processors.
> + *  If every processor needs to communicate with all the others, the communication matrix is full
> + *  and paired isend / irecv calls would have to be used instead.
> + *
> + *  The example needs to be launched in parallel.
> + *  Every proc will build a list of tuples that will be sent to a few other procs.
> + *
> + *  Every proc will send one tuple to proc (rank + 1) % size and one to
> + *    (rank + rank*(rank+1) + 2) % size, with value
> + *    10000 * destination + 100 * rank
> + *
> + *  At the receiving end, we verify that we received
> + *    10000 * rank + 100 * from
> + *
> + *    For some reportrank we also print the tuples.
> + *
> + *  After routing, we check that we received what we expected. The example should run on at least 2 processors.
> + *
> + * Note: We do not need a MOAB instance for this example.
> + *
> + */
> +
> +/** @example CrystalRouterExample.cpp \n
> + * \brief Generalized gather/scatter using tuples \n
> + * <b>To run</b>: mpiexec -np <n> CrystalRouterExample [reportrank] \n
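> + * For example, mpiexec -np 4 CrystalRouterExample 0 prints the tuples sent and received by rank 0. \n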
> + *
> + */
> +//
> +#include "moab/ProcConfig.hpp"
> +#include "moab/TupleList.hpp"
> +#include <iostream>
> +
> +using namespace moab;
> +using namespace std;
> +
> +int main(int argc, char **argv)
> +{
> +  MPI_Init(&argc, &argv);
> +
> +  int reportrank = 1;
> +  if (argc>1)
> +    reportrank = atoi(argv[1]);
> +  ProcConfig pc(MPI_COMM_WORLD);
> +  int size = pc.proc_size();
> +  int rank = pc.proc_rank();
> +
> +  if (reportrank==rank)
> +  {
> +    std::cout << " there are " << size << " procs in example\n";
> +  }
> +  // send some data from each proc to rank+1 and to rank + rank*(rank+1) + 2, modulo size
> +
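> +  // the ProcConfig gives us access to the crystal router built on its communicator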
> +  gs_data::crystal_data *cd = pc.crystal_router();
> +
> +  TupleList tl;
> +
> +  // preallocate room for at most 100 tuples
> +  // some tuple lists on some processors might need more memory and have to be able
> +  // to grow locally; 100 is a very generous number for this example, considering that each task
> +  // sends only 2 tuples. Some tasks might receive more tuples though, and in the process some
> +  // lists might grow more than others. By doing these log(P) rounds of sends/receives, we do not
> +  // grow local memory too much.
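> +  // initialize(#ints, #longs, #ulongs, #reals, max_tuples): each tuple here carries 1 int
> +  // (the destination rank), 1 long (the value we verify later), no unsigned longs and 1 real,
> +  // with room preallocated for 100 tuples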
> +  tl.initialize(1, 1, 0, 1, 100);
> +  tl.enableWriteAccess();
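> +  // write access lets us fill vi_wr / vl_wr / vr_wr directly and grow the count with inc_n()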
> +  // form 2 tuples, sent to rank+1 and to rank + rank*(rank+1) + 2 (both mod size)
> +  unsigned int n = tl.get_n();
> +  int sendTo = rank+1;
> +  sendTo = sendTo%size;
> +  long intToSend = 100*rank + 10000*sendTo;
> +  tl.vi_wr[n]= sendTo;
> +  tl.vl_wr[n]= intToSend;
> +  tl.vr_wr[n]= 100.*rank;
> +  tl.inc_n();
> +
> +  n = tl.get_n();
> +  sendTo = rank + (rank+1)*rank + 2; // just some rank-dependent number, usually different from rank
> +  sendTo = sendTo%size;
> +  intToSend = 100*rank + 10000*sendTo;
> +  tl.vi_wr[n]= sendTo;
> +  tl.vl_wr[n]= intToSend;
> +  tl.vr_wr[n]= 1000.*rank;
> +  tl.inc_n();
> +
> +  if (reportrank==rank)
> +  {
> +    std::cout << "rank " << rank << "\n";
> +    tl.print(" before sending");
> +  }
> +
> +  // all communication happens here:
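> +  // the int field of each tuple is read as the destination rank; after the transfer, each tuple
> +  // sits on that rank and the int field holds the rank it came from (used as 'from' below)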
> +  ErrorCode rval = cd->gs_transfer(1,tl,0);
> +
> +  if (MB_SUCCESS!= rval)
> +  {
> +    std::cout << "error in tuple transfer\n";
> +  }
> +
> +  if (reportrank==rank)
> +  {
> +    std::cout << "rank " << rank << "\n";
> +    tl.print(" after transfer");
> +  }
> +  // check that all tuples received have the form 10000* rank + 100*from
> +  unsigned int received = tl.get_n();
> +  for (unsigned int i=0; i<received; i++)
> +  {
> +    int from = tl.vi_rd[i];
> +    long valrec = tl.vl_rd[i];
> +    long remainder = valrec - 10000*rank - 100*from;
> +    if (remainder != 0)
> +      std::cout << " error: tuple " << i << " received at proc rank " << rank << " from proc " << from << " has value " <<
> +         valrec << " remainder " <<  remainder << "\n";
> +  }
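> +  // if nothing was printed above, every received tuple had the expected value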
> +
> +  MPI_Finalize();
> +
> +  return 0;
> +}
>
> diff --git a/examples/makefile b/examples/makefile
> index 8a84899..3692b26 100644
> --- a/examples/makefile
> +++ b/examples/makefile
> @@ -8,7 +8,7 @@ include ${MOAB_DIR}/lib/iMesh-Defs.inc
>  MESH_DIR="../MeshFiles/unittest"
>
>  EXAMPLES = HelloMOAB GetEntities SetsNTags structuredmesh StructuredMeshSimple DirectAccessWithHoles DirectAccessNoHoles point_in_elem_search DeformMeshRemap
> -PAREXAMPLES = HelloParMOAB ReduceExchangeTags LloydRelaxation
> +PAREXAMPLES = HelloParMOAB ReduceExchangeTags LloydRelaxation CrystalRouterExample
>  EXOIIEXAMPLES = TestExodusII
>  F90EXAMPLES = DirectAccessNoHolesF90 PushParMeshIntoMoabF90
>
> @@ -47,6 +47,9 @@ ReduceExchangeTags : ReduceExchangeTags.o ${MOAB_LIBDIR}/libMOAB.la
>  HelloParMOAB: HelloParMOAB.o ${MOAB_LIBDIR}/libMOAB.la
>         ${MOAB_CXX} -o $@ $< ${MOAB_LIBS_LINK}
>
> +CrystalRouterExample: CrystalRouterExample.o  ${MOAB_LIBDIR}/libMOAB.la
> +       ${MOAB_CXX} -o $@ $< ${MOAB_LIBS_LINK}
> +
>  TestExodusII: TestExodusII.o ${MOAB_LIBDIR}/libMOAB.la
>         ${MOAB_CXX} -o $@ $< ${MOAB_LIBS_LINK}
>
> Repository URL: https://bitbucket.org/fathomteam/moab/
>

