[petsc-users] Load balancing / redistributing a 1D DM

Matthew Knepley knepley at gmail.com
Mon Mar 5 07:29:37 CST 2018


On Mon, Mar 5, 2018 at 8:17 AM, Dave May <dave.mayhem23 at gmail.com> wrote:

>
>
> On 5 March 2018 at 09:29, Åsmund Ervik <Asmund.Ervik at sintef.no> wrote:
>
>> Hi all,
>>
>> We have a code that solves the 1D multiphase Euler equations, using some
>> very expensive thermodynamic calls in each cell in each time step. The
>> computational time for different cells varies significantly in the spatial
>> direction (due to different thermodynamic states), and varies slowly from
>> timestep to timestep.
>>
>> Currently the code runs in serial, but I would like to use a PETSc DM of
>> some sort to run it in parallel. There will be no linear or nonlinear
>> PETSc solves etc., just a distributed mesh, at least initially. The code is
>> Fortran.
>>
>> Now for my question: Is it possible to do dynamic load balancing using a
>> plain 1D DMDA, somehow? There is some mention of this for PCTELESCOPE, but
>> I guess it only works for linear solves? Or could I use an index set or
>> some other PETSc structure? Or do I need to use a 1D DMPLEX?
>>
>
> I don't think TELESCOPE is what you want to use.
>
> TELESCOPE redistributes a DMDA from one MPI communicator to another MPI
> communicator with fewer ranks. I would not describe its functionality as
> "load balancing". Re-distribution could be interpreted as load balancing
> onto a different communicator, with an equal "load" associated with each
> point in the DMDA - but that is not what you are after. In addition, I
> didn't add support within TELESCOPE to re-distribute a 1D DMDA as that
> use-case almost never arises.
>
> For a 1D problem such as yours, I would use your favourite graph
> partitioner (Metis, ParMetis, Scotch) together with your cell-based
> weighting and repartition the data yourself.
>
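
For concreteness, here is a rough C sketch of that idea using PETSc's MatPartitioning interface (which can call ParMetis, Scotch, etc. underneath) rather than calling the partitioner directly. The function name RepartitionCells1D and the cost[] argument are invented for the example; the weights are assumed to be positive integers (e.g. a measured per-cell cost):

  #include <petscmat.h>

  /* Hypothetical helper: compute a new owner rank for every locally owned cell
     of a 1D chain of nglobal cells, weighting cell i by cost[i].
     gstart is the global index of the first locally owned cell. */
  static PetscErrorCode RepartitionCells1D(MPI_Comm comm, PetscInt nlocal, PetscInt nglobal,
                                           PetscInt gstart, const PetscInt cost[], IS *newrank)
  {
    Mat             adj;
    MatPartitioning part;
    PetscInt       *ia, *ja, *wgt, i, nnz = 0;
    PetscErrorCode  ierr;

    PetscFunctionBeginUser;
    /* CSR dual graph of the chain: cell g is adjacent to g-1 and g+1 */
    ierr = PetscMalloc1(nlocal+1, &ia);CHKERRQ(ierr);
    ierr = PetscMalloc1(2*nlocal, &ja);CHKERRQ(ierr);
    ia[0] = 0;
    for (i = 0; i < nlocal; ++i) {
      const PetscInt g = gstart + i;
      if (g > 0)         ja[nnz++] = g - 1;
      if (g < nglobal-1) ja[nnz++] = g + 1;
      ia[i+1] = nnz;
    }
    /* MatCreateMPIAdj takes ownership of ia and ja and frees them itself */
    ierr = MatCreateMPIAdj(comm, nlocal, nglobal, ia, ja, NULL, &adj);CHKERRQ(ierr);

    ierr = MatPartitioningCreate(comm, &part);CHKERRQ(ierr);
    ierr = MatPartitioningSetAdjacency(part, adj);CHKERRQ(ierr);
    ierr = PetscMalloc1(nlocal, &wgt);CHKERRQ(ierr);
    for (i = 0; i < nlocal; ++i) wgt[i] = cost[i];  /* heavier cells -> fewer cells per rank */
    ierr = MatPartitioningSetVertexWeights(part, wgt);CHKERRQ(ierr); /* PETSc frees wgt */
    ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr);        /* e.g. -mat_partitioning_type parmetis */
    ierr = MatPartitioningApply(part, newrank);CHKERRQ(ierr);        /* target rank per local cell */

    ierr = MatPartitioningDestroy(&part);CHKERRQ(ierr);
    ierr = MatDestroy(&adj);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

The resulting IS only says where each cell should go; moving the actual cell data is still up to the application (or to something like DMSWARM, as described next).
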
> This is not a very helpful comment but I'll make it anyway...
> If your code were in C, or C++, and you didn't want to mess around with any
> MPI calls at all from your application code, I think you could use the
> DMSWARM object pretty easily to perform the load balancing. I haven't tried
> this exact use-case myself, but in principle you could take the output from
> Metis (which tells you the rank you should move each point in the graph to)
> and directly shove this info into a SWARM object and then ask it to migrate
> your data.
> DMSWARM lets you define and migrate (across a communicator) any data type
> you like - it doesn't have to be a PetscReal or PetscScalar; you can define C
> structs, for example. Unfortunately I haven't had time to add Fortran
> support for DMSWARM yet.
>
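
A rough C sketch of that DMSWARM route, assuming the destination rank of each locally owned cell has already been computed (e.g. by a partitioner as above). The field name "cellstate" and the helper MigrateCellData are invented for illustration; the per-cell data could just as well be a registered C struct:

  #include <petscdmswarm.h>

  /* Pack one real value per cell into a swarm, write each point's destination
     rank into the reserved "DMSwarm_rank" field, and let the swarm migrate. */
  static PetscErrorCode MigrateCellData(MPI_Comm comm, PetscInt nlocal,
                                        const PetscReal cellstate[], const PetscInt targetrank[],
                                        DM *swarm_out)
  {
    DM             swarm;
    PetscReal     *state;
    PetscInt      *rankfield, i, bs;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = DMCreate(comm, &swarm);CHKERRQ(ierr);
    ierr = DMSetType(swarm, DMSWARM);CHKERRQ(ierr);
    ierr = DMSwarmSetType(swarm, DMSWARM_BASIC);CHKERRQ(ierr);
    ierr = DMSwarmRegisterPetscDatatypeField(swarm, "cellstate", 1, PETSC_REAL);CHKERRQ(ierr);
    ierr = DMSwarmFinalizeFieldRegister(swarm);CHKERRQ(ierr);
    ierr = DMSwarmSetLocalSizes(swarm, nlocal, 0);CHKERRQ(ierr);

    /* Copy the per-cell data into the swarm */
    ierr = DMSwarmGetField(swarm, "cellstate", &bs, NULL, (void**)&state);CHKERRQ(ierr);
    for (i = 0; i < nlocal; ++i) state[i] = cellstate[i];
    ierr = DMSwarmRestoreField(swarm, "cellstate", &bs, NULL, (void**)&state);CHKERRQ(ierr);

    /* Tell each point which rank it should live on, then migrate; with a basic
       (non-PIC) swarm the migration sends each point to the rank in DMSwarm_rank */
    ierr = DMSwarmGetField(swarm, DMSwarmField_rank, &bs, NULL, (void**)&rankfield);CHKERRQ(ierr);
    for (i = 0; i < nlocal; ++i) rankfield[i] = targetrank[i];
    ierr = DMSwarmRestoreField(swarm, DMSwarmField_rank, &bs, NULL, (void**)&rankfield);CHKERRQ(ierr);
    ierr = DMSwarmMigrate(swarm, PETSC_TRUE);CHKERRQ(ierr);

    *swarm_out = swarm; /* read the migrated values back with DMSwarmGetField() */
    PetscFunctionReturn(0);
  }
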

Okay, then I would say just use Plex.

Step 1: You can create a 1D Plex easily with DMPlexCreateFromCellList().
Then call DMPlexDistribute().
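
A minimal C sketch of Step 1, assuming the whole 1D mesh (ncells_global segments on the unit interval) is built as a cell list on rank 0 and then distributed; Create1DPlex and ncells_global are placeholder names:

  #include <petscdmplex.h>

  static PetscErrorCode Create1DPlex(MPI_Comm comm, PetscInt ncells_global, DM *dmdist)
  {
    DM             dm, dmParallel = NULL;
    PetscMPIInt    rank;
    PetscInt       nc = 0, nv = 0, i;
    int           *cells  = NULL;
    double        *coords = NULL;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
    if (!rank) {            /* whole mesh lives on rank 0 before distribution */
      nc = ncells_global;
      nv = ncells_global + 1;
      ierr = PetscMalloc1(2*nc, &cells);CHKERRQ(ierr);
      ierr = PetscMalloc1(nv, &coords);CHKERRQ(ierr);
      for (i = 0; i < nc; ++i) { cells[2*i] = (int)i; cells[2*i+1] = (int)(i+1); }
      for (i = 0; i < nv; ++i) coords[i] = (double)i / (double)ncells_global;
    }
    /* dim = 1, 2 vertices per cell, no interpolation needed in 1D */
    ierr = DMPlexCreateFromCellList(comm, 1, nc, nv, 2, PETSC_FALSE, cells, 1, coords, &dm);CHKERRQ(ierr);
    ierr = PetscFree(cells);CHKERRQ(ierr);
    ierr = PetscFree(coords);CHKERRQ(ierr);

    /* Distribute with zero overlap; the partitioner is selectable at runtime,
       e.g. -petscpartitioner_type parmetis */
    ierr = DMPlexDistribute(dm, 0, NULL, &dmParallel);CHKERRQ(ierr);
    if (dmParallel) { ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmParallel; }
    *dmdist = dm;
    PetscFunctionReturn(0);
  }
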

Step 2: I would just create a PetscFE() since it defaults to P0, which is
what you want. Something like

  ierr = PetscFECreateDefault(comm, dim, dim, user->simplex, "vel_",
                              PETSC_DEFAULT, &fe);CHKERRQ(ierr);

Step 3: Set it in your solver

  SNESSetDM(snes, dm)

Step 4: Test the solve

If everything works, then I will show you how to make or set a partition
and redistribute.

  Thanks,

     Matt

> Cheers,
>   Dave
>
>>
>> If the latter, how do I make a 1D DMPLEX? All the variables are stored in
>> cell centers (collocated), so it's a completely trivial "mesh". I tried
>> reading the DMPLEX manual, and looking at examples, but I'm having trouble
>> penetrating the FEM lingo / abstract nonsense.
>>
>> Best regards,
>> Åsmund
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/