[petsc-users] parallel dual porosity

Matthew Knepley knepley at gmail.com
Mon May 27 18:32:38 CDT 2019


On Mon, May 27, 2019 at 7:26 PM Adrian Croucher via petsc-users <
petsc-users at mcs.anl.gov> wrote:

> hi
>
> A couple of years back I was asking questions here about implementing
> "dual porosity" finite volume methods via PETSc (in which flow in
> fractured media is represented by adding extra "matrix" cells nested
> inside the original mesh cells).
>
> At the time I was asking about how to solve the resulting linear
> equations more efficiently (I still haven't worked on that part yet,
> so at present it's just using a naive linear solve which doesn't take
> advantage of the particular sparsity pattern), and about how to add
> the extra cells into the DMPlex mesh, which I figured out how to do.
>
> It is working OK except that strong scaling performance is not very
> good, if dual porosity is applied over only part of the mesh. I think
> the reason is that I read the mesh in and distribute it, then add the
> dual porosity cells in parallel on each process. So some processes can
> end up with more cells than others, in which case the load balancing is
> bad.
>
> I'm considering trying to change it so that I add the dual porosity
> cells to the DMPlex in serial, before distribution, to regain decent
> load balancing.
>

I would not do that. It should be much easier, and better from a workflow
standpoint, to just redistribute in parallel. We now have several test
examples that redistribute in parallel, for example


https://bitbucket.org/petsc/petsc/src/cd762eb66180d8d1fcc3950bd19a3c1b423f4f20/src/dm/impls/plex/examples/tests/ex1.c#lines-486

Let us know if you have problems.
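As a rough illustration of the suggestion above, here is a minimal sketch of
redistributing an already-distributed DMPlex after extra cells have been added
in parallel. The function name and surrounding setup are hypothetical; the
PETSc calls (DMPlexDistribute, DMPlexDistributeField, PetscSFDestroy) are real,
but check the version of PETSc you are using for the exact behavior:

```c
#include <petscdmplex.h>

/* Sketch (hypothetical helper): rebalance a parallel DMPlex after
 * dual-porosity cells have been added on each process. Assumes *dm
 * is an already-distributed DMPlex. */
static PetscErrorCode RedistributeAfterDualPorosity(DM *dm)
{
  DM             dmRedist    = NULL;
  PetscSF        migrationSF = NULL;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* DMPlexDistribute also accepts a parallel input mesh: it invokes the
   * configured partitioner and migrates points to rebalance the load. */
  ierr = DMPlexDistribute(*dm, 0 /* overlap */, &migrationSF, &dmRedist);CHKERRQ(ierr);
  if (dmRedist) { /* NULL output means no redistribution was needed */
    /* The migration SF can be passed to DMPlexDistributeField() to move
     * any cell-based data (e.g. geometry vectors) to the new layout. */
    ierr = PetscSFDestroy(&migrationSF);CHKERRQ(ierr);
    ierr = DMDestroy(dm);CHKERRQ(ierr);
    *dm  = dmRedist;
  }
  PetscFunctionReturn(0);
}
```

This keeps the whole workflow parallel: read, distribute, add cells, then
rebalance, rather than building the augmented mesh in serial first.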

  Thanks,

     Matt


> To do that, I'd also need to compute the cell centroids in serial (as
> they are often used to identify which cells should have dual porosity
> applied), using DMPlexComputeGeometryFVM(). The geometry vectors would
> then have to be distributed later, I guess using something like
> DMPlexDistributeField().
>
> Should I expect a significant performance hit from calling
> DMPlexComputeGeometryFVM() on the serial mesh compared with doing it (as
> now) on the distributed mesh? It will increase the serial fraction of
> the code but as it's only done once at the start I'm hoping the benefits
> will outweigh the costs.
>
> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.croucher at auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
