[petsc-users] handling multi physics applications on multiple MPI_Comm

Matthew Knepley knepley at gmail.com
Mon Jul 25 16:55:58 CDT 2016


On Mon, Jul 25, 2016 at 2:30 PM, Manav Bhatia <bhatiamanav at gmail.com> wrote:

>
> On Jul 25, 2016, at 3:43 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
> Yes. I think the confusion here is between the problem you are trying to
> solve, and the tool for doing it.
>
> Disparate size of subsystems seems to me to be a _load balancing_ problem.
> Here you can use data layout to alleviate this.
> On the global comm, you can put all the fluid unknowns on ranks 0..N-2,
> and the structural unknowns on N-1. You can have
> more general splits than that.
>
>
> Ok. So, if I do that, there would still be just one comm? And the
> distribution would be set by specifying the number of local fluid dofs on
> rank N-1 to be zero?
>

Yes. If all you want is good load balance, I think this is the best way.
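
For example, a minimal sketch of that layout (the dof counts, the 2016-era
error-checking style, and the assumption of at least two ranks are all just
illustrations, not part of the original discussion) would set the per-rank
local size explicitly when creating the global Vec, so the last rank holds
only the structural dofs:

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscMPIInt    rank, size;
  PetscInt       nfluid  = 1000000;  /* hypothetical global fluid dof count */
  PetscInt       nstruct = 5000;     /* hypothetical global structural dof count */
  PetscInt       nlocal;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);

  if (rank == size - 1) {
    nlocal = nstruct;                /* last rank: all structural dofs, zero fluid dofs */
  } else {
    nlocal = nfluid / (size - 1);    /* ranks 0..N-2 share the fluid dofs */
    if (rank < nfluid % (size - 1)) nlocal++;
  }

  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = VecSetSizes(x, nlocal, nfluid + nstruct);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x);CHKERRQ(ierr);

  /* ... assemble and solve on the single global comm ... */

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

The same per-rank local sizes would be passed to MatSetSizes for the system
matrix, so the matrix rows follow the same split as the vector.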

  Thanks,

    Matt


> Sorry that this is such a basic question.
>
>
> If for some reason the structural assembly uses a large number of
> collective operations (say, artificial timestepping to reach some
> steady-state property), then it might make sense to pull out a subcomm of
> only the occupied ranks, but only above 1000 procs, and only on a
> non-BlueGene machine. This is also easy to measure before you do this work.
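
If that subcomm route were taken, a minimal sketch of splitting the occupied
ranks off the global comm could use MPI_Comm_split (the rank assignment below
is only an assumption for illustration, matching the layout discussed above):

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscMPIInt    rank, size;
  MPI_Comm       struct_comm;
  int            color;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);

  /* only the rank(s) holding structural dofs join the subcomm; everyone
     else passes MPI_UNDEFINED and gets MPI_COMM_NULL back */
  color = (rank == size - 1) ? 0 : MPI_UNDEFINED;
  ierr  = MPI_Comm_split(PETSC_COMM_WORLD, color, rank, &struct_comm);CHKERRQ(ierr);

  if (struct_comm != MPI_COMM_NULL) {
    /* collective structural work (e.g. pseudo-timestepping to a steady
       state) runs here without involving the fluid-only ranks */
    ierr = MPI_Comm_free(&struct_comm);CHKERRQ(ierr);
  }

  ierr = PetscFinalize();
  return ierr;
}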


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener