On Mon, Mar 5, 2018 at 8:17 AM, Dave May <dave.mayhem23@gmail.com> wrote:

> On 5 March 2018 at 09:29, Åsmund Ervik <Asmund.Ervik@sintef.no> wrote:
>
>> Hi all,
>>
>> We have a code that solves the 1D multiphase Euler equations, using some very expensive thermodynamic calls in each cell in each time step. The computational time for different cells varies significantly in the spatial direction (due to different thermodynamic states), and varies slowly from time step to time step.
>>
>> Currently the code runs in serial, but I would like to use a PETSc DM of some sort to run it in parallel. There will be no linear or nonlinear PETSc solves etc., just a distributed mesh, at least initially. The code is Fortran.
>>
>> Now for my question: Is it possible to do dynamic load balancing using a plain 1D DMDA, somehow? There is some mention of this for PCTELESCOPE, but I guess that only works for linear solves? Or could I use an index set or some other PETSc structure? Or do I need to use a 1D DMPLEX?
>
> I don't think TELESCOPE is what you want to use.
>
> TELESCOPE redistributes a DMDA from one MPI communicator to another MPI communicator with fewer ranks. I would not describe its functionality as "load balancing". Re-distribution could be interpreted as load balancing onto a different communicator, with an equal "load" associated with each point in the DMDA - but that is not what you are after. In addition, I didn't add support within TELESCOPE to re-distribute a 1D DMDA, as that use case almost never arises.
>
> For a 1D problem such as yours, I would use your favourite graph partitioner (METIS, ParMETIS, Scotch) together with your cell-based weighting, and repartition the data yourself.
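>
> Untested, but just to make the idea concrete, the partitioning call could look something like the sketch below. Here N (global number of cells), nranks (communicator size) and vwgt[] (one integer weight per cell, e.g. your scaled per-cell timings) are placeholder names. Note plain METIS is serial, so you would gather the weights on rank 0, partition there, and scatter part[] back out (ParMETIS does the same thing in parallel).
>
>   #include <stdlib.h>
>   #include <metis.h>
>
>   /* 1D chain graph in CSR form: cell i is adjacent to cells i-1 and i+1 */
>   idx_t nvtxs = N, ncon = 1, nparts = nranks, objval, e = 0;
>   idx_t i, *xadj, *adjncy, *part;
>
>   xadj   = malloc((N + 1) * sizeof(idx_t));
>   adjncy = malloc(2 * (N - 1) * sizeof(idx_t));
>   part   = malloc(N * sizeof(idx_t));
>   xadj[0] = 0;
>   for (i = 0; i < N; i++) {
>     if (i > 0)     adjncy[e++] = i - 1;
>     if (i < N - 1) adjncy[e++] = i + 1;
>     xadj[i + 1] = e;
>   }
>   /* weighted k-way partition; on return, part[i] is the rank that
>      cell i should live on */
>   METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy, vwgt, NULL, NULL,
>                       &nparts, NULL, NULL, NULL, &objval, part);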
>
> This is not a very helpful comment, but I'll make it anyway...
> If your code were in C or C++, and you didn't want to mess around with any MPI calls at all from your application code, I think you could use the DMSwarm object pretty easily to perform the load balancing. I haven't tried this exact use case myself, but in principle you could take the output from METIS (which tells you the rank you should move each point of the graph to), shove that info directly into a DMSwarm object, and then ask it to migrate your data; a rough sketch follows.
> DMSwarm lets you define and migrate (across a communicator) any data type you like - it doesn't have to be PetscReal or PetscScalar; you can define C structs, for example. Unfortunately I haven't had time to add Fortran support for DMSwarm yet.
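>
> Again untested, and I'm writing the field names from memory (check the man pages), but the migration part might look roughly like this, assuming nlocal cells on this rank, ndof reals of state per cell, and part[] from the partitioner - placeholder names again:
>
>   #include <petscdmswarm.h>
>
>   DM       swarm;
>   PetscInt p, *rankval;
>
>   /* error checking (ierr/CHKERRQ) omitted for brevity */
>   DMCreate(PETSC_COMM_WORLD, &swarm);
>   DMSetType(swarm, DMSWARM);
>   DMSwarmInitializeFieldRegister(swarm);
>   /* one field holding the per-cell state; C structs are also possible
>      via DMSwarmRegisterUserStructField() */
>   DMSwarmRegisterPetscDatatypeField(swarm, "state", ndof, PETSC_REAL);
>   DMSwarmFinalizeFieldRegister(swarm);
>   DMSwarmSetLocalSizes(swarm, nlocal, 0);
>
>   /* ... fill "state" via DMSwarmGetField()/DMSwarmRestoreField() ... */
>
>   /* tell each point which rank it should move to, straight from METIS */
>   DMSwarmGetField(swarm, "DMSwarm_rank", NULL, NULL, (void**)&rankval);
>   for (p = 0; p < nlocal; p++) rankval[p] = (PetscInt)part[p];
>   DMSwarmRestoreField(swarm, "DMSwarm_rank", NULL, NULL, (void**)&rankval);
>
>   /* ship each point (and all its registered fields) to its new owner */
>   DMSwarmMigrate(swarm, PETSC_TRUE);
>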
Okay, then I would say just use Plex.

Step 1: You can create a 1D Plex easily with DMPlexCreateFromCellList(), then call DMPlexDistribute(). (There is a sketch of this step below.)

Step 2: I would just create a PetscFE, since it defaults to P0, which is what you want. Something like

  ierr = PetscFECreateDefault(comm, dim, dim, user->simplex, "vel_", PETSC_DEFAULT, &fe);CHKERRQ(ierr);

Step 3: Set it in your solver:

  SNESSetDM(snes, dm);

Step 4: Test the solve.

If everything works, then I will show you how to make or set a partition and redistribute.
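
As a sketch of Step 1 (untested; comm and ierr are assumed declared as in the Step 2 snippet, and N = 100 is a stand-in for your cell count): build the cell list for N uniform cells on [0,1] on rank 0, pass empty input on the other ranks, and distribute:

  DM          dm, dmDist;
  PetscInt    N = 100, c, v;          /* N cells, N+1 vertices */
  int        *cells  = NULL;
  double     *coords = NULL;
  PetscMPIInt rank;

  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  if (!rank) {
    ierr = PetscMalloc2(2*N, &cells, N+1, &coords);CHKERRQ(ierr);
    for (c = 0; c < N; ++c) {cells[2*c] = c; cells[2*c+1] = c+1;}  /* segment c connects vertices c, c+1 */
    for (v = 0; v <= N; ++v) coords[v] = (double) v / N;
  }
  /* dim = 1, spaceDim = 1, 2 vertices per segment cell */
  ierr = DMPlexCreateFromCellList(comm, 1, rank ? 0 : N, rank ? 0 : N+1, 2, PETSC_TRUE, cells, 1, coords, &dm);CHKERRQ(ierr);
  if (!rank) {ierr = PetscFree2(cells, coords);CHKERRQ(ierr);}

  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}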

  Thanks,

     Matt

> Thanks,
>   Dave
>
>> If the latter, how do I make a 1D DMPLEX? All the variables are stored in cell centers (collocated), so it's a completely trivial "mesh". I tried reading the DMPLEX manual and looking at examples, but I'm having trouble penetrating the FEM lingo / abstract nonsense.
>>
>> Best regards,
>> Åsmund
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/