[petsc-users] handling multi physics applications on multiple MPI_Comm

Matthew Knepley knepley at gmail.com
Mon Jul 25 15:43:14 CDT 2016


On Mon, Jul 25, 2016 at 1:34 PM, Manav Bhatia <bhatiamanav at gmail.com> wrote:

> Thanks for your comments, Matt.
>
> I have a fluid-structural application with a really large fluid
> discretization and a really small structural discretization. Due to the
> relative difference in size, I have defined the structural system on only a
> single node, and the fluid system on (say) N nodes.
>
> So far, I have hand-coded a Schur complement for a frequency-domain
> analysis that handles the difference in comms.
>
> I am attempting to migrate to the nested matrix constructs for some future
> work, and was looking at the possibility of reusing the same distribution
> of comms. I am also looking to add more disciplines and was considering
> defining those systems on different comms.
>
> I wasn't sure whether this approach would create more problems than it
> solves.
>
> Would you recommend that all objects exist on a global_comm so that there
> is no confusion about these operations?
>

Yes. I think the confusion here is between the problem you are trying to
solve, and the tool for doing it.
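
To make that concrete, here is a minimal sketch (placeholder names and sizes,
not your code) of a 2x2 MatNest in which every block, including the shell
coupling blocks, lives on PETSC_COMM_WORLD:

#include <petscmat.h>

/* Hypothetical user routines applying the coupling blocks */
extern PetscErrorCode FluidStructMult(Mat,Vec,Vec);   /* y = A_fs x */
extern PetscErrorCode StructFluidMult(Mat,Vec,Vec);   /* y = A_sf x */

/* Aff and Ass are the diagonal blocks, assumed already created on
   PETSC_COMM_WORLD with local sizes nf_local (fluid) and ns_local
   (structure) on this rank */
PetscErrorCode BuildNest(PetscInt nf_local,PetscInt ns_local,Mat Aff,Mat Ass,void *ctx,Mat *Anest)
{
  PetscErrorCode ierr;
  Mat            Afs,Asf,blocks[4];

  PetscFunctionBeginUser;
  /* Off-diagonal coupling blocks as shells on the SAME global comm; the
     local sizes must be compatible with the layouts of Aff and Ass */
  ierr = MatCreateShell(PETSC_COMM_WORLD,nf_local,ns_local,PETSC_DETERMINE,PETSC_DETERMINE,ctx,&Afs);CHKERRQ(ierr);
  ierr = MatShellSetOperation(Afs,MATOP_MULT,(void (*)(void))FluidStructMult);CHKERRQ(ierr);
  ierr = MatCreateShell(PETSC_COMM_WORLD,ns_local,nf_local,PETSC_DETERMINE,PETSC_DETERMINE,ctx,&Asf);CHKERRQ(ierr);
  ierr = MatShellSetOperation(Asf,MATOP_MULT,(void (*)(void))StructFluidMult);CHKERRQ(ierr);

  blocks[0] = Aff; blocks[1] = Afs;    /* [ A_ff  A_fs ] */
  blocks[2] = Asf; blocks[3] = Ass;    /* [ A_sf  A_ss ] */
  ierr = MatCreateNest(PETSC_COMM_WORLD,2,NULL,2,NULL,blocks,Anest);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With everything on one comm, assembly and VecGetSubVector()/VecRestoreSubVector()
are simply collective on that comm, so the questions about which comm to use
go away.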

Disparate size of subsystems seems to me to be a _load balancing_ problem,
and data layout is the way to alleviate it. On the global comm, you can put
all the fluid unknowns on ranks 0..N-2 and the structural unknowns on rank
N-1; more general splits than that are also possible.
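
For example, here is a minimal sketch, with made-up names, that puts every
structural unknown on the last rank:

#include <petscvec.h>

/* Create the structural sub-vector on the global comm, with all
   nStructGlobal unknowns placed on the last rank */
PetscErrorCode CreateStructVec(PetscInt nStructGlobal,Vec *xs)
{
  PetscErrorCode ierr;
  PetscMPIInt    rank,size;
  PetscInt       nlocal;

  PetscFunctionBeginUser;
  ierr   = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  ierr   = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr);
  nlocal = (rank == size-1) ? nStructGlobal : 0;  /* all on the last rank */
  ierr   = VecCreateMPI(PETSC_COMM_WORLD,nlocal,nStructGlobal,xs);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The same local-size argument to MatCreateAIJ() does the analogous thing for
the structural diagonal block.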

If for some reason the structural assembly uses a large number of collective
operations (say, artificial timestepping to reach some steady-state property),
then it might make sense to pull out a subcomm of only the occupied ranks, but
only above 1000 procs, and only on a non-BlueGene machine. This is also easy
to measure before you do the work.
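
If you do go that route, something like this sketch (hypothetical helper,
untested) hands a communicator only to the ranks that own structural
unknowns; everyone else gets MPI_COMM_NULL:

#include <petscsys.h>

PetscErrorCode GetStructComm(MPI_Comm *structcomm)
{
  PetscErrorCode ierr;
  PetscMPIInt    rank,size,color;

  PetscFunctionBeginUser;
  ierr  = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  ierr  = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr);
  color = (rank == size-1) ? 0 : MPI_UNDEFINED;   /* only the last rank joins */
  ierr  = MPI_Comm_split(PETSC_COMM_WORLD,color,rank,structcomm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}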

   Matt


> Thanks,
> Manav
>
>
>
> On Jul 25, 2016, at 3:21 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Mon, Jul 25, 2016 at 1:13 PM, Manav Bhatia <bhatiamanav at gmail.com>
> wrote:
>
>> Hi,
>>
>>     I have a multi physics application with discipline1 defined on comm1
>> and discipline2 on comm2.
>>
>>     My intent is to use the nested matrix for the KSP solver where each
>> diagonal block is provided by the disciplines, and the off-diagonal blocks
>> are defined as shell-matrices with matrix vector products.
>>
>>     I am a bit unclear about how to deal with the case of different set
>> of processors on comm1 and comm2. I have the following questions and would
>> appreciate some guidance:
>>
>> — Would it make sense to define a comm_global as a union of comm1 and
>> comm2 for the MatCreateNest?
>>
>> — The diagonal blocks are available on comm1 and comm2 only. Should
>> MatAssemblyBegin/End for these diagonal blocks be called on comm1 and comm2
>> separately?
>>
>> — What comm should be used for the off-diagonal shell matrices?
>>
>> — Likewise, when calling VecGetSubVector and VecRestoreSubVector to get
>> sub-vectors corresponding to discipline1 (or 2), on what comm should these
>> calls be made?
>>
>
> I would first ask if you have a convincing reason for doing this, because
> it sounds like the genesis of a million programming errors.
>
> All the linear algebra objects would have to be in a global comm that
> contained any subcomms you want to use. I don't
> think it would make sense to define submatrices on subcomms. You can have
> your assembly code run on a subcomm certainly,
> but again this is a tricky business and I find it hard to understand the
> gain.
>
>    Matt
>
>
>> Thanks,
>> Manav
>>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener