[petsc-users] Ghost particles for DMSWARM (or similar)

Matthew Knepley knepley at gmail.com
Fri Aug 2 08:58:10 CDT 2024


On Thu, Aug 1, 2024 at 4:40 PM MIGUEL MOLINOS PEREZ <mmolinos at us.es> wrote:

>
> Dear all,
>
> I am implementing a Molecular Dynamics (MD) code using the DMSWARM interface. In MD simulations we evaluate on each particle (atom) some kind of scalar functional using data from the neighbouring atoms. My problem lies in the parallel implementation of the model, because some of these neighbours may lie on a different processor.
>
> This is usually solved by using ghost particles. A similar approach (with nodes instead of particles) is already implemented for other PETSc mesh structures, e.g. DMPlexConstructGhostCells. Unfortunately, I don't see this kind of construct for DMSWARM. Am I missing something?
>
> I think this could be done by defining a buffer region on the background DMDA mesh that I already use for domain decomposition, then using the buffer region of each cell to locate the ghost particles, and finally using VecCreateGhost. Is this feasible? Or is there an easier approach using other PETSc functions?
>
>
This is feasible, but it would be good to develop a set of best practices,
since we have been mainly focused on the case of non-redundant particles.
Here is how I think I would do what you want; a rough code sketch follows the steps.

1) Add a particle field 'ghost' that identifies ghost vs. owned particles. I
think it needs the values OWNED, OVERLAP, and GHOST

2) At some interval, identify particles that should be sent to other
processes as ghosts. I would call these "overlap particles". The
determination seems application-specific, so I would leave it to the
user for now. We do two things to these particles:

    a) Mark the chosen particles as OVERLAP

    b) Change their rank to the process we are sending them to

3) Call DMSwarmMigrate with PETSC_FALSE for the particle deletion flag

4) Mark OVERLAP particles as GHOST when they arrive
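
Here is a minimal sketch of steps 1)-4) in C. It assumes the 'ghost' marker is
stored as a PETSC_INT field, that the swarm migrates according to the
DMSwarm_rank field (the basic migration scheme), and that the application
supplies its own routine (the hypothetical FindOverlapParticles() below) that
decides which local particles go to which neighbour. DMSwarmRegisterPetscDatatypeField(),
DMSwarmGetField()/DMSwarmRestoreField(), DMSwarmGetLocalSize(), and DMSwarmMigrate()
are existing API; everything else is an assumption, not a tested implementation.

  #include <petscdmswarm.h>

  /* Values for the user-defined 'ghost' field (assumption) */
  typedef enum {PART_OWNED = 0, PART_OVERLAP = 1, PART_GHOST = 2} GhostStatus;

  /* Hypothetical, application-specific routine: sets sendrank[p] to the neighbour
     rank that local particle p should be replicated on, or -1 to leave it alone. */
  extern PetscErrorCode FindOverlapParticles(DM sw, PetscInt npoints, PetscInt sendrank[]);

  static PetscErrorCode ExchangeGhostParticles(DM sw)
  {
    PetscInt   *ghost, *rank, *sendrank;
    PetscInt    npoints, p;
    PetscMPIInt myrank;

    PetscFunctionBeginUser;
    PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)sw), &myrank));
    /* 1) the marker field is assumed to have been registered once at setup time:
          DMSwarmRegisterPetscDatatypeField(sw, "ghost", 1, PETSC_INT); */

    /* 2) decide which particles are overlap particles, mark them, retarget their rank */
    PetscCall(DMSwarmGetLocalSize(sw, &npoints));
    PetscCall(PetscMalloc1(npoints, &sendrank));
    PetscCall(FindOverlapParticles(sw, npoints, sendrank));
    PetscCall(DMSwarmGetField(sw, "ghost", NULL, NULL, (void **)&ghost));
    PetscCall(DMSwarmGetField(sw, "DMSwarm_rank", NULL, NULL, (void **)&rank));
    for (p = 0; p < npoints; ++p) {
      if (sendrank[p] >= 0) {
        ghost[p] = PART_OVERLAP;
        rank[p]  = sendrank[p];
      }
    }
    PetscCall(DMSwarmRestoreField(sw, "DMSwarm_rank", NULL, NULL, (void **)&rank));
    PetscCall(DMSwarmRestoreField(sw, "ghost", NULL, NULL, (void **)&ghost));
    PetscCall(PetscFree(sendrank));

    /* 3) migrate without removing the sent particles from the sender */
    PetscCall(DMSwarmMigrate(sw, PETSC_FALSE));

    /* 4) relabel: received copies become GHOST, kept originals go back to OWNED.
          Assumption: after migration the DMSwarm_rank entry of a received copy
          equals the local rank, while a kept original still carries the
          destination rank it was given in step 2. */
    PetscCall(DMSwarmGetLocalSize(sw, &npoints));
    PetscCall(DMSwarmGetField(sw, "ghost", NULL, NULL, (void **)&ghost));
    PetscCall(DMSwarmGetField(sw, "DMSwarm_rank", NULL, NULL, (void **)&rank));
    for (p = 0; p < npoints; ++p) {
      if (ghost[p] == PART_OVERLAP) {
        if (rank[p] == (PetscInt)myrank) ghost[p] = PART_GHOST;
        else {ghost[p] = PART_OWNED; rank[p] = (PetscInt)myrank;}
      }
    }
    PetscCall(DMSwarmRestoreField(sw, "DMSwarm_rank", NULL, NULL, (void **)&rank));
    PetscCall(DMSwarmRestoreField(sw, "ghost", NULL, NULL, (void **)&ghost));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

The relabelling after migration is the part worth double-checking against the
migration scheme you actually use, since a cell-DM-based migration would
overwrite the manually set ranks.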

There is one problem in the above algorithm. It does not allow sending
particles to multiple ranks. We would have to do this
in phases right now, or make a small adjustment to the interface allowing
replication of particles when a set of ranks is specified.
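
One way to do it in phases under the current interface would be to run the
mark-and-migrate cycle once per destination, e.g. with a hypothetical variant
of the routine above that takes the target neighbour for that phase
(ExchangeGhostParticlesTo(), nneighbours, and neighbour[] below are all
assumed, not existing API):

  /* Phased exchange (sketch): one mark-and-migrate pass per neighbour rank,
     so the same particle can end up as a ghost on several processes. */
  for (PetscInt n = 0; n < nneighbours; ++n) {
    PetscCall(ExchangeGhostParticlesTo(sw, neighbour[n]));
  }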

  Thanks,

     Matt


> Thank you,
> Miguel
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

