[petsc-users] Ghost particles for DMSWARM (or similar)
Matthew Knepley
knepley at gmail.com
Fri Aug 2 17:33:02 CDT 2024
On Fri, Aug 2, 2024 at 11:15 AM MIGUEL MOLINOS PEREZ <mmolinos at us.es> wrote:
> Thank you Matt for your time,
>
> What you describe seems to me the ideal approach.
>
> 1) Add a particle field 'ghost' that identifies ghost vs owned particles.
> I think it needs options OWNED, OVERLAP, and GHOST
>
> Does this mean that, locally, I need to allocate Nlocal plus the (duplicated)
> ghost particles for my model?
>
I would do it another way. I would allocate the particles with no overlap
and set them up. Then I would identify the halo particles, mark them as
OVERLAP, call DMSwarmMigrate(), and mark the migrated particles as GHOST,
then unmark the OVERLAP particles. Shoot! That marking will not work since
we cannot tell the difference between particles we received and particles
we sent. Okay, instead of the `ghost` field we need an `owner rank` field.
So then we
1) Set up the non-overlapping particles
2) Identify the halo particles
3) Change the `rank`, but not the `owner rank`
4) Call DMSwarmMigrate()
Now we can identify ghost particles by the `owner rank` field.
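In code, the whole cycle might look something like the untested sketch below.
Here "owner_rank" is a user-registered PETSC_INT field (registered with
DMSwarmRegisterPetscDatatypeField() before DMSwarmFinalizeFieldRegister()),
"DMSwarm_rank" is the built-in field that Migrate() reads for the target rank,
and dest[] is an application-computed array with dest[p] >= 0 meaning "send a
ghost copy of local particle p to rank dest[p]":

#include <petscdmswarm.h>

static PetscErrorCode SwarmSendGhosts(DM sw, const PetscInt dest[])
{
  PetscInt   *owner, *target, npoints;
  PetscMPIInt rank;

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)sw), &rank));
  PetscCall(DMSwarmGetLocalSize(sw, &npoints));
  PetscCall(DMSwarmGetField(sw, "owner_rank", NULL, NULL, (void **)&owner));
  PetscCall(DMSwarmGetField(sw, "DMSwarm_rank", NULL, NULL, (void **)&target));
  for (PetscInt p = 0; p < npoints; ++p) {
    owner[p] = rank;                        /* 1) remember who really owns p     */
    if (dest[p] >= 0) target[p] = dest[p];  /* 2)+3) change rank, not owner rank */
  }
  PetscCall(DMSwarmRestoreField(sw, "DMSwarm_rank", NULL, NULL, (void **)&target));
  PetscCall(DMSwarmRestoreField(sw, "owner_rank", NULL, NULL, (void **)&owner));
  PetscCall(DMSwarmMigrate(sw, PETSC_FALSE)); /* 4) migrate, keeping the originals */
  /* Ghost particles are now exactly those with owner_rank != rank */
  PetscFunctionReturn(PETSC_SUCCESS);
}
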
> If that is so, how do I do the communication between the ghost particles
> living on rank i and their “real” counterparts on rank j?
>
> Also, as an alternative, what about:
> 1) Use an IS tag which contains, for each rank, a list of the global
> indices of the neighbouring particles that live outside that rank.
> 2) Use VecCreateGhost to create a new vector which contains extra local
> space for the ghost components of the vector.
> 3) Use VecScatterCreate, VecScatterBegin, and VecScatterEnd to transfer
> data between the ghosted vector and a vector obtained with
> DMSwarmCreateGlobalVectorFromField.
> 4) Do the necessary computations using the vectors created with
> VecCreateGhost.
>
This is essentially what Migrate() does. I was trying to reuse the code.
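For reference, that VecCreateGhost() route would look roughly like the
untested sketch below; nlocal, nghost, and the global indices in ghostIdx[]
are assumed to come from your neighbour search:

#include <petscvec.h>

static PetscErrorCode UpdateGhostValues(MPI_Comm comm, PetscInt nlocal, PetscInt nghost,
                                        const PetscInt ghostIdx[], Vec *gv)
{
  Vec lv;

  PetscFunctionBeginUser;
  /* owned entries plus ghost slots for the listed global indices */
  PetscCall(VecCreateGhost(comm, nlocal, PETSC_DETERMINE, nghost, ghostIdx, gv));
  /* ... fill the owned entries, e.g. copied from the swarm field vector ... */
  /* pull the owners' values into the ghost slots on the neighbouring ranks */
  PetscCall(VecGhostUpdateBegin(*gv, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecGhostUpdateEnd(*gv, INSERT_VALUES, SCATTER_FORWARD));
  /* the local form exposes the owned entries followed by the ghost entries */
  PetscCall(VecGhostGetLocalForm(*gv, &lv));
  /* ... do the neighbour computations with lv ... */
  PetscCall(VecGhostRestoreLocalForm(*gv, &lv));
  PetscFunctionReturn(PETSC_SUCCESS);
}
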
Thanks,
Matt
> Thanks,
> Miguel
>
> On Aug 2, 2024, at 8:58 AM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Thu, Aug 1, 2024 at 4:40 PM MIGUEL MOLINOS PEREZ <mmolinos at us.es>
> wrote:
>
>>
>> Dear all,
>>
>> I am implementing a Molecular Dynamics (MD) code using the DMSWARM interface. In MD simulations we evaluate, on each particle (atom), some kind of scalar functional using data from the neighbouring atoms. My problem lies in the parallel implementation of the model, because some of these neighbours may lie on a different processor.
>>
>> This is usually solved by using ghost particles. A similar approach (with nodes instead) is already implemented for other PETSc mesh structures, e.g. DMPlexConstructGhostCells. Unfortunately, I don't see this kind of construct for DMSWARM. Am I missing something?
>>
>> I think this could be done by adding a buffer region, exploiting the background DMDA mesh that I already use for the domain decomposition, then using the buffer region of each cell to locate the ghost particles, and finally using VecCreateGhost. Is this feasible? Or is there an easier approach using other PETSc functions?
>>
>>
> This is feasible, but it would be good to develop a set of best practices,
> since we have been mainly focused on the case of non-redundant particles.
> Here is how I think I would do what you want.
>
> 1) Add a particle field 'ghost' that identifies ghost vs owned particles.
> I think it needs options OWNED, OVERLAP, and GHOST
>
> 2) At some interval, identify particles that should be sent to other
> processes as ghosts. I would call these "overlap particles". The
> determination seems application specific, so I would leave it to the
> user right now. We do two things to these particles:
>
> a) Mark chosen particles as OVERLAP
>
> b) Change rank to process we are sending to
>
> 3) Call DMSwarmMigrate with PETSC_FALSE for the particle deletion flag
>
> 4) Mark OVERLAP particles as GHOST when they arrive
>
> There is one problem in the above algorithm. It does not allow sending
> particles to multiple ranks. We would have to do this
> in phases right now, or make a small adjustment to the interface allowing
> replication of particles when a set of ranks is specified.
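> For 1), the registration might look roughly like this untested sketch
> (the 'ghost' field just holds an integer code for OWNED/OVERLAP/GHOST):
>
>   DM sw;
>
>   PetscCall(DMCreate(PETSC_COMM_WORLD, &sw));
>   PetscCall(DMSetType(sw, DMSWARM));
>   /* ... set the swarm type and the background cell DM as you do now ... */
>   PetscCall(DMSwarmRegisterPetscDatatypeField(sw, "ghost", 1, PETSC_INT)); /* OWNED/OVERLAP/GHOST */
>   PetscCall(DMSwarmFinalizeFieldRegister(sw));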
>
> Thanks,
>
> Matt
>
>
>> Thank you,
>> Miguel
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/