[petsc-dev] Unevenly distributed MatNest and FieldSplit

Pierre Jolivet pierre.jolivet at enseeiht.fr
Thu Jun 13 02:31:23 CDT 2019


OK, it turns out this did not have much to do with PETSc itself.
I remember having told you (either here or on Bitbucket, I can’t remember the thread) that some LU packages were behaving weirdly with empty local matrices.
It was said that something should be done about this, but apparently it hasn’t been.
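
To make the setup concrete, here is a hypothetical, minimal sketch of the kind of unevenly distributed MatNest being discussed (this is not our actual FSI code; the 2x2 layout, sizes, and values are made up): one diagonal block owns all of its rows on rank 0 and has zero local rows everywhere else, yet every block is created on the same communicator as the nest. Run it on two or more processes.

/* Hypothetical sketch: a MatNest on PETSC_COMM_WORLD in which block (0,0) has
   all of its rows on rank 0 and zero local rows on the other ranks, while
   block (1,1) is spread over all ranks.  Not the FSI code from this thread. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            blocks[4] = {NULL, NULL, NULL, NULL}; /* 2x2 nest, off-diagonal blocks empty */
  Mat            A;
  PetscMPIInt    rank;
  PetscInt       i, row, nA = 4, nB = 6, rstart, rend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);

  /* Block (0,0): all nA rows on rank 0, zero local rows elsewhere ("structure") */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, rank == 0 ? nA : 0, rank == 0 ? nA : 0, nA, nA, 1, NULL, 0, NULL, &blocks[0]); CHKERRQ(ierr);
  /* Block (1,1): rows spread over all processes ("fluid") */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nB, nB, 1, NULL, 0, NULL, &blocks[3]); CHKERRQ(ierr);

  /* Put something on the diagonals so both blocks are assembled and nonsingular */
  for (i = 0; i < 2; i++) {
    Mat B = i ? blocks[3] : blocks[0];
    ierr = MatGetOwnershipRange(B, &rstart, &rend); CHKERRQ(ierr);
    for (row = rstart; row < rend; row++) {
      ierr = MatSetValue(B, row, row, 2.0, INSERT_VALUES); CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
    ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  }

  /* Assemble the 2x2 MatNest on the same (global) communicator as its blocks */
  ierr = MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, blocks, &A); CHKERRQ(ierr);
  ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);

  ierr = MatDestroy(&blocks[0]); CHKERRQ(ierr);
  ierr = MatDestroy(&blocks[3]); CHKERRQ(ierr);
  ierr = MatDestroy(&A); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}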

So, just changing all my 
-fieldsplit_fluid_interior_fieldsplit_[x,y,z]velocity_sub_pc_factor_mat_solver_type mumps
to
-fieldsplit_fluid_interior_fieldsplit_[x,y,z]velocity_sub_pc_factor_mat_solver_type [petsc|superlu]
fixed the SNES behaviour.
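
For reference, the programmatic counterpart of that option is PCFactorSetMatSolverType(). Below is a minimal, self-contained sketch of selecting the factorization package in code; it is a plain sequential LU solve, not the nested FieldSplit setup above. MATSOLVERPETSC is always available; MATSOLVERSUPERLU or MATSOLVERMUMPS only work if PETSc was configured with the corresponding package.

/* Minimal sketch: choose which package performs the LU factorization,
   equivalent to -pc_factor_mat_solver_type petsc on the command line.
   Standalone sequential example, not the FSI code from this thread. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PC             pc;
  PetscInt       i, n = 8;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* Small sequential tridiagonal test matrix */
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 3, NULL, &A); CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    if (i > 0)     { ierr = MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES); CHKERRQ(ierr); }
    if (i < n - 1) { ierr = MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES); CHKERRQ(ierr); }
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES); CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatCreateVecs(A, &x, &b); CHKERRQ(ierr);
  ierr = VecSet(b, 1.0); CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_SELF, &ksp); CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A); CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPPREONLY); CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU); CHKERRQ(ierr);
  /* Equivalent of -pc_factor_mat_solver_type petsc */
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERPETSC); CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = VecDestroy(&b); CHKERRQ(ierr);
  ierr = MatDestroy(&A); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}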

Thank you,
Pierre

> On 12 Jun 2019, at 12:16 PM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> 
> 
>  mpiexec -n <n> ./myprogram <other options> -log_trace > afile
> 
>  grep "\[0\]" afile > process0
>  grep "\[1\]" afile > process1 
> 
>  paste process0 process1 | more
> 
>  For the two processes, pick ones that take different paths in the code.
> 
>  Almost certainly something is defined on a sub-communicator, and the group of processes in that sub-communicator is getting "behind" the other processes, caught in their own MPI reduction. Hopefully the above will make clear where the two sets of processes branch.
> 
>   Good luck,
> 
>   Barry
> 
> 
>> On Jun 12, 2019, at 1:44 AM, Pierre Jolivet via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
>> 
>> Hello,
>> We are using a SNES to solve a steady-state FSI problem.
>> The operator is defined as a MatNest with multiple fields.
>> Some submatrices are entirely defined on a subset of processes (but they are still created on the same communicator as the MatNest).
>> The preconditioner is defined as a FieldSplit.
>> 
>> During the first call to KSPSolve within SNESSolve, I’m getting this (with debugging turned on):
>> [0]PETSC ERROR: VecGetSubVector() line 1243 in /Users/jolivet/Documents/repositories/petsc/src/vec/vec/interface/rvector.c MPI_Allreduce() called in different locations (code lines) on different processors
>> [2]PETSC ERROR: VecNorm_MPI() line 57 in /Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c MPI_Allreduce() called in different locations (code lines) on different processors
>> [1]PETSC ERROR: VecNorm_MPI() line 57 in /Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c MPI_Allreduce() called in different locations (code lines) on different processors
>> 
>> As you may have guessed, process 0 (resp. 1–2) is where the structure (resp. fluid) is handled.
>> I’m attaching both stacks. I don’t see what could trigger such an error from my side, since everything is delegated to PETSc in the SNESSolve.
>> Is there an easy way to debug this?
>> Is there some way to dump _everything_ related to a KSP (Mat + PC + ISes) for “easier” debugging?
>> 
>> Thank you,
>> Pierre
>> 
>> <stack_1--2.txt><stack_0.txt>
> 


