MPI-layout of PETSc

Rolf Kuiper kuiper at mpia-hd.mpg.de
Sat Jul 4 16:33:33 CDT 2009


No problem, here is the code:

// the numbers of processors per direction are (int) x_procs, y_procs, z_procs respectively
// (no parallelization in direction 'dir' means dir_procs = 1)

MPI_Comm NewComm;
int MPI_Rank, NewRank, x,y,z;

// get rank from MPI ordering:
MPI_Comm_rank(MPI_COMM_WORLD, &MPI_Rank);

// calculate coordinates of cpus in MPI ordering:
x = MPI_Rank / (z_procs*y_procs);
y = (MPI_Rank % (z_procs*y_procs)) / z_procs;
z = (MPI_Rank % (z_procs*y_procs)) % z_procs;

// set new rank according to PETSc ordering:
NewRank = z*y_procs*x_procs + y*x_procs + x;

// create a communicator with the new ranks according to PETSc ordering
// (split on MPI_COMM_WORLD, same color everywhere, NewRank as the key):
MPI_Comm_split(MPI_COMM_WORLD, 1, NewRank, &NewComm);

// override the default communicator (which is MPI_COMM_WORLD by default);
// this has to happen before PetscInitialize() is called
PETSC_COMM_WORLD = NewComm;
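
For completeness, here is a rough sketch of how the pieces fit together in a
full program (a minimal sketch only: the processor counts are hard-coded, the
DA creation is left out, and the "petsc.h"/PETSC_NULL usage assumes the
PETSc 3.0-era interface, so adapt the names to your own setup):

#include <mpi.h>
#include "petsc.h"

int main(int argc, char **argv)
{
  // processor counts per direction; hard-coded here only for the sketch
  int x_procs = 2, y_procs = 2, z_procs = 1;

  int MPI_Rank, NewRank, x, y, z;
  MPI_Comm NewComm;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &MPI_Rank);

  // coordinates of this cpu in the MPI (z-fastest) ordering
  x = MPI_Rank / (z_procs*y_procs);
  y = (MPI_Rank % (z_procs*y_procs)) / z_procs;
  z = (MPI_Rank % (z_procs*y_procs)) % z_procs;

  // new rank in the PETSc (x-fastest) ordering
  NewRank = z*y_procs*x_procs + y*x_procs + x;

  // same color everywhere; the key (NewRank) fixes the new rank order
  MPI_Comm_split(MPI_COMM_WORLD, 1, NewRank, &NewComm);

  // has to happen BEFORE PetscInitialize()
  PETSC_COMM_WORLD = NewComm;
  PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);

  // ... create the DA(s) on PETSC_COMM_WORLD and run as usual ...

  PetscFinalize();
  MPI_Finalize();
  return 0;
}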

I hope this will be useful for some of you.

Ciao,
Rolf

-------------------------------------------------------
  Rolf Kuiper
  Max-Planck Institute for Astronomy

  Königstuhl 17
  69117 Heidelberg

  Office A5, Elsässer Labor
  Phone: 0049 (0)6221 528 350
  Mail: kuiper at mpia.de
  Homepage: http://www.mpia.de/~kuiper
-------------------------------------------------------

On 04.07.2009, at 19:24, Barry Smith wrote:
>
>   Send us the code to do the conversion and we'll include it as a utility.
>
>   Barry
>
> On Jul 4, 2009, at 6:08 AM, Rolf Kuiper wrote:
>
>> Thanks Barry!
>> It's working. But by the way: you should simply offer such a second
>> communicator inside the PETSc library.
>>
>> Thanks for all your help, the support we got from this mailing list  
>> is amazing,
>> Rolf
>>
>>
>> On 04.07.2009, at 01:44, Barry Smith wrote:
>>>
>>> Use MPI_Comm_split() with the same color for all processors, then
>>> use the second integer argument (the key) to indicate the new rank
>>> you want for the process.
>>> Choose the new rank so that its x,y coordinate in the logical grid
>>> matches the y,x coordinate in the Cartesian grid.
>>>
>>> Barry
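
(A minimal 2D sketch of this recipe, assuming x_procs and y_procs are known
and that the existing ordering is the y-fastest one from the example further
down in this thread:

  int old_rank, x, y, new_rank;
  MPI_Comm NewComm;

  MPI_Comm_rank(MPI_COMM_WORLD, &old_rank);

  // position of this cpu in the existing (y-fastest) ordering
  x = old_rank / y_procs;
  y = old_rank % y_procs;

  // key that places the cpu at the same (x,y) in PETSc's x-fastest ordering
  new_rank = y*x_procs + x;

  // same color (0) on every process; the key argument fixes the rank order
  MPI_Comm_split(MPI_COMM_WORLD, 0, new_rank, &NewComm);

The 3D code at the top of this mail is the same idea with a z-direction added.)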
>>>
>>> On Jul 3, 2009, at 12:09 PM, Rolf Kuiper wrote:
>>>
>>>> Hi Barry,
>>>>
>>>> I tried that already with:
>>>> First way by copying:
>>>> MPI_Comm_dup(PETSC_COMM_WORLD, &MyComm);
>>>>
>>>> Second way by creating:
>>>> int dims[3] = {0,0,0};
>>>> int ndims=3;
>>>> MPI_Dims_create(NumberOfProcessors, ndims, dims);
>>>> int periods[3] = {0, 0, 1};   // only the z-direction is periodic
>>>> int reorder = 1;
>>>> MPI_Comm MyComm;
>>>> MPI_Cart_create(PETSC_COMM_WORLD, ndims, dims, periods, reorder, &MyComm);
>>>>
>>>> in the end then:
>>>> PETSC_COMM_WORLD = MyComm;
>>>>
>>>> I tested MyComm with MPI_Topo_test() and it is Cartesian, yes.
>>>> I can get the coordinates of the cpus with
>>>> MPI_Cart_coords(MyComm, LocalRank, ndims, coords), but I found no way
>>>> to set/rearrange these coordinates.
>>>>
>>>> Can you help me in that case, or do I have to ask MPI support?
>>>>
>>>> Thanks for all,
>>>> Rolf
>>>>
>>>>
>>>> On 03.07.2009, at 17:56, Barry Smith wrote:
>>>>>
>>>>> In designing the PETSc DA I did not (out of ignorance) follow the
>>>>> layout approach of the MPI Cartesian MPI_Cart_create() (which places
>>>>> the first cpus along the y-direction first).
>>>>> I had it put the first cpus in the x-direction.
>>>>>
>>>>> What you need to do is create a new communicator that changes the
>>>>> order of the processors, so that when it is used by the PETSc DA
>>>>> they are laid out in the ordering that matches the other code. You
>>>>> will need to read up on the MPI_Cart stuff.
>>>>>
>>>>> To change PETSC_COMM_WORLD you simply set PETSC_COMM_WORLD =
>>>>> yournewcomm BEFORE calling PetscInitialize().
>>>>>
>>>>> Barry
>>>>>
>>>>> On Jul 3, 2009, at 3:52 AM, Rolf Kuiper wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On 30.06.2009, at 02:24, Barry Smith wrote:
>>>>>>>
>>>>>>> On Jun 29, 2009, at 7:07 PM, Rolf Kuiper wrote:
>>>>>>>
>>>>>>>> Hi PETSc users,
>>>>>>>>
>>>>>>>> I ran into trouble in combining my developed PETSc  
>>>>>>>> application with another code (based on another library  
>>>>>>>> called "ArrayLib").
>>>>>>>> The problem is the parallel layout for MPI, e.g. in 2D with 6
>>>>>>>> cpus the ArrayLib code assigns the names/ranks of the local
>>>>>>>> cpus first in the y-direction, then in x (from last to first,
>>>>>>>> in the same way the MPI arrays are indexed, like 3Darray[z][y][x]):
>>>>>>>>
>>>>>>>> y
>>>>>>>> ^
>>>>>>>> | 2-4-6
>>>>>>>> | 1-3-5
>>>>>>>> |--------> x
>>>>>>>>
>>>>>>>> If I call DACreate() from PETSc, it will assume an ordering
>>>>>>>> according to names/ranks first set in the x-direction, then in y:
>>>>>>>>
>>>>>>>> y
>>>>>>>> ^
>>>>>>>> | 4-5-6
>>>>>>>> | 1-2-3
>>>>>>>> |--------> x
>>>>>>>>
>>>>>>>> Of course, if I now communicate the boundary values, I mix up
>>>>>>>> the domain (built by the other program).
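
(Spelled out for this 3 x 2 example with 0-based ranks: a cpu at grid position
(x,y) has rank x*y_procs + y in the first layout but needs rank y*x_procs + x
in the second one, so the old ranks 0,1,2,3,4,5 have to become the new ranks
0,3,1,4,2,5. This is what the reordering at the top of this mail does for the
2D case.)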
>>>>>>>>
>>>>>>>> Is there a possibility / a flag to set the naming of the ranks?
>>>>>>>> Because my application is written and working in curvilinear
>>>>>>>> coordinates and not in Cartesian ones, I cannot just switch the
>>>>>>>> directions.
>>>>>>>
>>>>>>> What we recommend in this case is to just change the meaning  
>>>>>>> of x, y, and z when you use the PETSc DA.  This does mean  
>>>>>>> changing your code that uses the PETSc DA.
>>>>>>
>>>>>> The code is used as a module by many codes, so I would prefer not
>>>>>> to change the code (and the meaning of the directions, which would
>>>>>> not be user-friendly), but 'just' change the communicator.
>>>>>>
>>>>>>> I do not understand why curvilinear coordinates has anything  
>>>>>>> to do with it. Another choice is to create a new MPI  
>>>>>>> communicator that has the different ordering of the ranks of  
>>>>>>> the processors and then using that comm to create the PETSc DA  
>>>>>>> objects; then you would not need to change your code that  
>>>>>>> calls PETSc.
>>>>>>
>>>>>> Some time ago I tried to use the PetscSetCommWorld() routine, but
>>>>>> I can't find it anymore; how can I set a new communicator in
>>>>>> PETSc 3.0?
>>>>>> The communicator I want to use is MPI_COMM_WORLD, which has the
>>>>>> first ordering described above.
>>>>>> Now I read that MPI_COMM_WORLD is the default communicator for
>>>>>> PETSc. But why is the ordering then different?
>>>>>>
>>>>>> Sorry for all these questions, but (as you can see) I really
>>>>>> don't understand this comm problem at the moment.
>>>>>> Thanks for all,
>>>>>> Rolf
>>>>>>
>>>>>>> Unfortunately PETSc doesn't have any way to flip how the DA  
>>>>>>> handles the layout automatically.
>>>>>>>
>>>>>>> Barry
>>>>>>>
>>>>>>>>
>>>>>>>> Thanks a lot for your help,
>>>>>>>> Rolf
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
