[petsc-users] handling multi physics applications on multiple MPI_Comm

Manav Bhatia bhatiamanav at gmail.com
Mon Jul 25 16:30:20 CDT 2016


> On Jul 25, 2016, at 3:43 PM, Matthew Knepley <knepley at gmail.com> wrote:
> 
> Yes. I think the confusion here is between the problem you are trying to solve, and the tool for doing it.
> 
> Disparate size of subsystems seems to me to be a _load balancing_ problem. Here you can use data layout to alleviate this.
> On the global comm, you can put all the fluid unknowns on ranks 0..N-2, and the structural unknowns on N-1. You can have
> more general splits than that.
> 

Ok. So, if I do that, there would still be just one comm? And the distribution would then be achieved by specifying zero local fluid dofs on rank N-1?
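To make sure I am asking this correctly, here is a rough sketch of what I have in mind (the global sizes, the rank split, and the plain Vec creation are my own made-up placeholders, not taken from any existing code):

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec            fluid, structure;
      PetscMPIInt    rank, size;
      PetscInt       nf_local, ns_local;
      const PetscInt Nf = 1000000, Ns = 5000;   /* made-up global dof counts */
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
      MPI_Comm_size(PETSC_COMM_WORLD, &size);    /* assumes size >= 2 */

      /* fluid dofs on ranks 0 .. size-2, structural dofs only on rank size-1 */
      if (rank < size - 1) nf_local = Nf/(size-1) + ((Nf % (size-1)) > rank ? 1 : 0);
      else                 nf_local = 0;
      ns_local = (rank == size - 1) ? Ns : 0;

      /* both vectors live on the single global communicator; only the
         local sizes differ from rank to rank */
      ierr = VecCreate(PETSC_COMM_WORLD, &fluid);CHKERRQ(ierr);
      ierr = VecSetSizes(fluid, nf_local, Nf);CHKERRQ(ierr);
      ierr = VecSetFromOptions(fluid);CHKERRQ(ierr);

      ierr = VecCreate(PETSC_COMM_WORLD, &structure);CHKERRQ(ierr);
      ierr = VecSetSizes(structure, ns_local, Ns);CHKERRQ(ierr);
      ierr = VecSetFromOptions(structure);CHKERRQ(ierr);

      ierr = VecDestroy(&fluid);CHKERRQ(ierr);
      ierr = VecDestroy(&structure);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

(In the real code the fluid vector would of course come from the fluid discretization; the plain VecSetSizes calls above are only meant to show the local-size split I am asking about.)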

Sorry that this is such a basic question. 


> IF for some reason in the structural assembly you used a large number of collective operations (like say did artificial timestepping
> to get to some steady state property), then it might make sense to pull out a subcomm of only the occupied ranks, but only above
> 1000 procs, and only on a non-BlueGene machine. This is also easy to measure before you do this work.
> 
> 
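For my own reference, here is a rough sketch of what I understand by "pull out a subcomm of only the occupied ranks" (the function name and the n_local_struct argument are mine, just for illustration; it is only an MPI_Comm_split over the ranks that hold structural dofs):

    #include <petscsys.h>

    /* Sketch: return in *subcomm a communicator containing only the ranks
       that own structural dofs (n_local_struct > 0); all other ranks get
       MPI_COMM_NULL.  Error checking of the MPI calls is omitted. */
    static PetscErrorCode GetStructuralSubComm(MPI_Comm comm, PetscInt n_local_struct, MPI_Comm *subcomm)
    {
      PetscMPIInt rank;
      int         color;

      PetscFunctionBeginUser;
      MPI_Comm_rank(comm, &rank);
      color = (n_local_struct > 0) ? 0 : MPI_UNDEFINED;  /* MPI_UNDEFINED drops the rank */
      MPI_Comm_split(comm, color, rank, subcomm);
      PetscFunctionReturn(0);
    }

The structural objects used in the artificial timestepping would then be created on this subcomm (where it is not MPI_COMM_NULL) and the subcomm freed with MPI_Comm_free afterwards.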
