[petsc-users] Structured multi-block topology

Matthew Knepley knepley at gmail.com
Sat Apr 26 05:04:43 CDT 2014


On Fri, Apr 25, 2014 at 6:02 PM, Mark Lohry <mlohry at gmail.com> wrote:

> On Fri, Apr 25, 2014 at 6:03 PM, Matthew Knepley <knepley at gmail.com>
> wrote:
> > On Fri, Apr 25, 2014 at 2:47 PM, Mark Lohry <mlohry at gmail.com> wrote:
> >>
> >> The most common use case would be from 1 to 15 blocks or so, although I
> >> have occasionally seen the need for 100+. This also brings to mind a
> >> question I had about more general connectivity in single-block domains
> >> like a C-mesh, where the block has halo dependencies on itself that are
> >> not simply "periodic."
> >
> >
> > Jed and I have discussed this. DMComposite is really not great for even
> > moderate numbers of blocks because it serializes. Also, the real
> > performance advantage of structured blocks is saturated by a small number
> > of structured cells (32 or so), and thus large structured pieces do not
> > make a lot of sense (this is the same intuition behind the performance of
> > spectral elements).
>
> Huh? Perhaps this is the case in finite elements, but in FV you see
> great performance (and accuracy vs nDOFs) benefits from relatively
> large structured blocks in conjunction with FAS multigrid. As a visual
> aid, I'm thinking of meshes like this which have on the order of 64^3
> - 128^3 cells:
> http://spitfire.princeton.edu/mesh3.png


I need to clarify. I am sure you see great performance from a large
structured mesh. However, that performance comes from good vectorization
and decent blocking for the loads, so it will not decrease if you make
your structured pieces smaller. You can do FAS on unstructured meshes
just fine (we have an example in PETSc). The coarsest meshes would not be
structured, but performance is not a big issue there.
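
For concreteness, here is a minimal sketch of driving FAS through SNES on an
unstructured (DMPlex) mesh. This is not the example I mentioned; how the mesh
is built is left open, the options in the comments are only illustrative, and
error checking is omitted to keep it short.

    #include <petscdmplex.h>
    #include <petscsnes.h>

    int main(int argc, char **argv)
    {
      SNES snes;
      DM   dm;

      PetscInitialize(&argc, &argv, NULL, NULL);
      /* Unstructured mesh; in practice it would come from a file or options */
      DMCreate(PETSC_COMM_WORLD, &dm);
      DMSetType(dm, DMPLEX);
      DMSetFromOptions(dm);

      SNESCreate(PETSC_COMM_WORLD, &snes);
      SNESSetDM(snes, dm);
      SNESSetType(snes, SNESFAS);   /* nonlinear multigrid (FAS) */
      SNESSetFromOptions(snes);     /* e.g. -snes_fas_levels 4 -snes_fas_type full */
      /* ... set the residual with SNESSetFunction() or DMSNESSetFunctionLocal(),
         then call SNESSolve() ... */

      SNESDestroy(&snes);
      DMDestroy(&dm);
      PetscFinalize();
      return 0;
    }

In practice the level hierarchy would be built by refining a coarse mesh (see
below) rather than by coarsening the fine one.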


>
> > Our recommendation is to have a coarse unstructured mesh with regular
> > refinement inside each coarse cell. This is something we could really
> > support well. It's not all there yet, but we have all the pieces.
> >
>
> Would you still be able to exchange the fine-mesh data across block
> boundaries? That would be essential.
>

There would be no blocks, so yes, that would be easy.
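
For the coarse-unstructured-mesh-with-regular-refinement approach, a rough
sketch with DMPlex might look like the following; the level count and the way
the coarse mesh is created are just placeholders.

    #include <petscdmplex.h>

    int main(int argc, char **argv)
    {
      DM       dmCoarse, dmLevel[3];
      PetscInt l;

      PetscInitialize(&argc, &argv, NULL, NULL);
      DMCreate(PETSC_COMM_WORLD, &dmCoarse);
      DMSetType(dmCoarse, DMPLEX);
      DMSetFromOptions(dmCoarse);              /* small unstructured coarse mesh */
      DMPlexSetRefinementUniform(dmCoarse, PETSC_TRUE);
      DMRefineHierarchy(dmCoarse, 3, dmLevel); /* 3 levels of regular refinement */
      /* dmLevel[2] is the finest mesh; the whole hierarchy can back FAS/MG */
      for (l = 0; l < 3; ++l) DMDestroy(&dmLevel[l]);
      DMDestroy(&dmCoarse);
      PetscFinalize();
      return 0;
    }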

  Thanks,

     Matt


> >> As for the difficult task of setting up the communication: is it even
> >> possible to do manually without treating everything as fully
> >> unstructured? What other packages are out there for multi-block
> >> structured domains?
> >
> >
> > There is Overture.
> >
> >    Matt
> >
>
>
> Fair enough, thanks. For the time being it looks like I'll have to
> proceed by managing all the parallelization/message passing internally
> and manually filling in PETSc vectors for the various solvers.
>
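
If it helps, a minimal sketch of that last route is below: wrap your
application-owned local storage in a parallel Vec so your own halo exchange
stays in place. The size and array name are made up.

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      PetscInt     nLocal = 1000;  /* locally owned DOFs, per your partitioning */
      PetscScalar *myData;
      Vec          x;

      PetscInitialize(&argc, &argv, NULL, NULL);
      PetscMalloc1(nLocal, &myData);
      /* ... the application fills myData and does its own message passing ... */
      VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, nLocal, PETSC_DETERMINE,
                            myData, &x);
      /* x can now be handed to KSP/SNES as a solution or right-hand-side vector */
      VecDestroy(&x);
      PetscFree(myData);
      PetscFinalize();
      return 0;
    }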



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener