[petsc-users] Vertex only unstructured mesh with DMPlex and DMSWARM

Patrick Sanan patrick.sanan at gmail.com
Wed Feb 12 09:29:13 CST 2020


DMSwarm has DMSwarmMigrate() which might be the closest thing to
DMPlexDistribute().
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMSWARM/DMSwarmMigrate.html
Of course, it's good to create particles in parallel when possible.
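
For example, a rough petsc4py sketch of that workflow (assuming your
petsc4py build exposes the DMSwarm bindings; the box mesh and the point
count below are just placeholders):

    import numpy as np
    from petsc4py import PETSc

    comm = PETSc.COMM_WORLD

    # Background mesh that defines the parallel decomposition.
    plex = PETSc.DMPlex().createBoxMesh([8, 8], simplex=True, comm=comm)
    plex.distribute()

    # Particle-in-cell swarm attached to the background mesh.
    swarm = PETSc.DMSwarm().create(comm=comm)
    swarm.setDimension(plex.getDimension())
    swarm.setType(PETSc.DMSwarm.Type.PIC)
    swarm.setCellDM(plex)
    swarm.finalizeFieldRegister()

    # Each rank contributes a few points of its own (created in parallel).
    npoints_local = 4
    swarm.setLocalSizes(npoints_local, 0)
    coords = swarm.getField("DMSwarmPIC_coor").reshape((-1, 2))
    coords[...] = np.random.rand(npoints_local, 2)
    swarm.restoreField("DMSwarmPIC_coor")

    # Closest analogue of DMPlexDistribute(): points are sent to the rank
    # owning the cell that contains them.
    swarm.migrate(remove_sent_points=True)

The intent is that, after migrate(), each point lives on the rank that owns
the background-mesh cell containing it.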

On Wed, 12 Feb 2020 at 12:56, Hill, Reuben <
reuben.hill10 at imperial.ac.uk> wrote:

> I'm a new Firedrake developer working on getting my head around PETSc. As
> far as I'm aware, all our PETSc calls are done via petsc4py.
>
> I'm after general help and advice on two fronts:
>
>
> *1*:
>
> I’m trying to represent a point cloud as a vertex-only mesh in an attempt
> to play nicely with the Firedrake stack. If I try to do this using
> Firedrake, I trigger a PETSc segmentation fault at the point of calling
>
> PETSc.DMPlex().createFromCellList(dim, cells, coords, comm=comm)
>
>
> with dim=0, cells=[[0]], coords=[[1., 2.]] and comm=COMM_WORLD.
>
> Output:
>
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see
> https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS
> X to find memory corruption errors
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and
> run
> [0]PETSC ERROR: to get more information on the crash.
> application called MPI_Abort(MPI_COMM_WORLD, 50152059) - process 0
>
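> In self-contained form (just petsc4py, nothing else from the Firedrake
> stack), the failing call is roughly:
>
>     from petsc4py import PETSc
>
>     dim = 0                # a vertex-only mesh: 0-dimensional cells
>     cells = [[0]]          # a single "cell" consisting of vertex 0
>     coords = [[1., 2.]]    # that vertex sits at (1., 2.) in 2D
>     plex = PETSc.DMPlex().createFromCellList(dim, cells, coords,
>                                              comm=PETSc.COMM_WORLD)
>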
>
> I’m now looking into getting Firedrake to make a DMSWARM, which seems to
> have been designed for something closer to this and allows nice things
> such as letting the particles (for me, the vertices of the mesh) move and
> jump between MPI ranks à la particle-in-cell. I note the DMSwarm docs
> don't suggest there is an equivalent of the plex.distribute() method in
> petsc4py (which I believe calls DMPlexDistribute) that a DMPlex has. In
> Firedrake we create empty DMPlexes on each rank except 0, then call
> plex.distribute() to take care of parallel partitioning. How, therefore,
> am I meant to go about distributing particles across MPI ranks?
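>
> For reference, the DMPlex pattern I mean is roughly the following (a
> sketch only; the two-triangle mesh is just a placeholder):
>
>     import numpy as np
>     from petsc4py import PETSc
>
>     comm = PETSc.COMM_WORLD
>     if comm.rank == 0:
>         # Define the whole mesh on rank 0: two triangles on a unit square.
>         cells = np.asarray([[0, 1, 2], [1, 3, 2]], dtype=np.int32)
>         coords = np.asarray([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
>         plex = PETSc.DMPlex().createFromCellList(2, cells, coords, comm=comm)
>     else:
>         # Empty DMPlex everywhere else.
>         plex = PETSc.DMPlex().createFromCellList(
>             2, np.zeros((0, 3), dtype=np.int32), np.zeros((0, 2)), comm=comm)
>     plex.distribute()  # parallel partitioning happens here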
>
>
> *2*:
>
> I'm aware these questions may be very naive. Any advice for learning about
> the relevant bits of PETSc would be very much appreciated. I'm in chapter 2
> of the excellent manual (
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf) and I'm also
> attempting to understand the DMSwarm example. I presume that in order to
> understand DMSWARM I really ought to understand DMs more generally (i.e.
> read the whole manual)?
>
>
> Many thanks
> Reuben Hill
>
>