[MPICH] about C++

Philip Sydney Lavers psl02 at uow.edu.au
Tue Oct 18 20:44:06 CDT 2005


Hello,

>  Does anyone use C++ in parallel computing?

Yes, I use C++ routinely, but I suggest you do not try to
"translate" C or Fortran code. That is a waste of time and
creates spaghetti. I find it worthwhile to use C++ "from the
ground up" for its considerable advantages.

> I am working with a big project in which the main program is
> written in C. I am going to rewrite the program in C++;
> however, the program is too big, so before any modification
> I need to know the following things:
>
> 1) I am going to use std::valarray or a third-party library
> like blitz++ as the data container. I am not sure whether data
> stored in such containers are contiguous or not. If not, how
> can I MPI_Send the buffer?
>
> 2) If I need an extra buffer to store the data in std::valarray
> before MPI_Send, then I need to reconstruct the valarray after
> the data is received. Is that a time-consuming job?
>
> I need your experience.
>
> Thanks in advance.
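
On (1): the standard guarantees that both std::vector and
std::valarray keep their elements in contiguous storage (for a
valarray v, &v[0] + i == &v[i]), so you can pass the address of
the first element straight to MPI_Send. On (2): for the same
reason you can receive directly into the container's own storage,
so no extra buffer or reconstruction is needed. A minimal sketch,
with the element count and message tag invented for illustration:

#include <mpi.h>
#include <valarray>

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);
    const int myid = MPI::COMM_WORLD.Get_rank();

    const int N   = 1000;   // illustrative element count
    const int TAG = 99;     // illustrative message tag

    if (myid == 0) {
        std::valarray<double> data(1.0, N);   // N doubles, all 1.0
        // &data[0] is the start of N contiguous doubles:
        MPI::COMM_WORLD.Send(&data[0], N, MPI::DOUBLE, 1, TAG);
    }
    else if (myid == 1) {
        std::valarray<double> data(N);        // N zero-initialised doubles
        // receive straight into the valarray's storage:
        MPI::COMM_WORLD.Recv(&data[0], N, MPI::DOUBLE, 0, TAG);
    }

    MPI::Finalize();
    return 0;
}

The same &v[0] trick works for std::vector. Be careful with
blitz++, though: a sliced array view need not be contiguous, so
check (isStorageContiguous(), if I remember the member rightly)
before handing its data pointer to MPI.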


Here are some snippets of code that show how I use vectors to
store data and how the data are then distributed and recollected
after modification in the parallel processes. The objects I am
using are vortices, derived from a basic class Point which
essentially holds a position in 3D space. Vortices have many
attributes, such as spin, so they are quite complicated. The
programmes simulate the dynamic behaviour of systems of
hundreds of interacting vortices and thousands of
(interacting) Pins (another object inheriting from Point).
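
To make the snippets easier to follow, here is a rough sketch of
that hierarchy. The member variables and Pin details are invented
for illustration; only the Vortex constructor signature and the
where_x()/set_x() style accessors match what the code below
actually calls:

// Base class: essentially a position in 3D space.
class Point {
public:
    Point(double x, double y, double z) : x_(x), y_(y), z_(z) {}
    double where_x() const { return x_; }
    double where_y() const { return y_; }
    void set_x(double x) { x_ = x; }
    void set_y(double y) { y_ = y; }
protected:
    double x_, y_, z_;
};

// Vortices add physical attributes (id, status, spin, ...) to a Point.
class Vortex : public Point {
public:
    Vortex(double x, double y, double z, int id, int status,
           int spin, double lambda)
        : Point(x, y, z), id_(id), status_(status),
          spin_(spin), lambda_(lambda) {}
private:
    int id_, status_, spin_;
    double lambda_;
};

// Pins are another kind of Point; details omitted here.
class Pin : public Point {
public:
    Pin(double x, double y, double z) : Point(x, y, z) {}
};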



#include <mpi.h>
//#include <mpicxx.h>
#define MPE_GRAPHICS
#include <mpe.h>
#include <algorithm>
#include <vector>
#include <stdexcept>

#include <time.h>
#include <iostream>
#include <fstream>

#include <string>
#include <sstream>

#include "nr.h"
#include "nrutil.h"
#include "physics.h"
#include "moldyn.h"

using namespace MPI;
using namespace std;
using namespace NR;

#define NUMBINS 100

...................................

//MPI stuff:
    int n, myid, numprocs;
    double startwtime = 0.0, endwtime;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    Init(argc, argv);
    numprocs = COMM_WORLD.Get_size();
    myid     = COMM_WORLD.Get_rank();
    Get_processor_name(processor_name, namelen);

    cout << "Process " << myid << " of " << numprocs
         << " is on " << processor_name << endl;


..............................
    int numvorts, increment = atoi(argv[3]), TOTVORTS = atoi(argv[2]),
        i, j, k, dummy, decreased, numruns = atoi(argv[5]), run,
        count_left, count_right, num_field_vorts, pad_vorts;

    int batchsize;          // number of particles handled by each process

    int PHASE, ramp;        // controls for increasing or decreasing
                            // the magnetic field

    double timestep = strtod(argv[6], NULL);
    int Theta_x, Theta_y;
    int runs_per_frame;     // for the "movies" of the motion and trajectories
    vector<Vortex> vorts, boundvorts;
    int status, spin;       // vorts can be annihilated by status = 0,
                            // antivorts will have spin -1
    vector<Pin> pins;
    int numpins;
    Pin* pinp;
    Vortex* vortp;
    double frontcoeff, slabcoeffA, slabcoeffB, stochcoeff, f0, fp, fv;
    vector<Vec> forces, forcesnb;
    Vec *forcep, *forcepnb, av_force;
    double x, y, z, xv, yv, newx, newy, checkcoeff, highcheck, lowcheck;
    double gamma_x, gamma_y;

....................................................

if(PHASE==0){
    for (dummy=increment; dummy <= TOTVORTS; dummy+=increment)
    {   //outer ramping-up loop
        numvorts = dummy;   //use dummy to be sure numvorts is not
                            //incremented after the test <= TOTVORTS

        batchsize = numvorts/numprocs;
        cout << "ramp = " << ramp << " numvorts = " << numvorts
             << " batchsize = " << batchsize << endl;

        //make buffers for vortex locations on all processes:
        vortslocbuff      = new double[2*numvorts];
        vortslocbuff_send = new double[2*batchsize];


        // make the vortices on root, roughly equal density on either margin:
        if(myid==0){
            for (i=ramp*increment; i<numvorts; i++)
            {   //build the vortices in random locations
                x = ran2(seed)*(x1-x0+x3-x2)+x0;
                if(x>x1) x += (x2-x1);
                y = ran2(seed)*(y1-y0)+y0;
                z = 0.0;
                vortp = new Vortex(x, y, z, i, status, spin, lambda);
                vorts.push_back(*vortp);
                delete vortp;
            }

            //fill the buffer on root:
            for (i=0; i<numvorts; i++)
            {
                vortslocbuff[2*i]   = vorts[i].where_x();
                vortslocbuff[2*i+1] = vorts[i].where_y();
            }
        }

        //Broadcast the locations:
        MPI::COMM_WORLD.Bcast(vortslocbuff, 2*numvorts, DOUBLE, 0);

        // build the vorts on the other processes:
        if(myid==0) COMM_WORLD.Barrier();
        if(myid!=0){
            for (i=ramp*increment; i<numvorts; i++)
            {
                x = vortslocbuff[2*i];
                y = vortslocbuff[2*i+1];
                z = 0.0;
                vortp = new Vortex(x, y, z, i, status, spin, lambda);
                vorts.push_back(*vortp);
                delete vortp;
            }
            COMM_WORLD.Barrier();
        }

................................

//run simulation timestep and integration


..............................

//Synchronise:
COMM_WORLD.Barrier();

// Gather the locations and relocate the vortices:
MPI::COMM_WORLD.Allgather(vortslocbuff_send, 2*batchsize, DOUBLE,
                          vortslocbuff, 2*batchsize, DOUBLE);
for (i=0; i<numvorts; i++)
{
    vorts[i].set_x(vortslocbuff[2*i]);
    vorts[i].set_y(vortslocbuff[2*i + 1]);
}

..........................................
delete [] vortslocbuff;
delete [] vortslocbuff_send;


etc.etc.


I hope this gives you some pointers (no pun intended).

regards,

Philip Lavers



