[petsc-users] MatAssemblyEnd taking too long

Jed Brown jed at jedbrown.org
Thu Aug 20 12:54:38 CDT 2020


Matthew Knepley <knepley at gmail.com> writes:

> On Thu, Aug 20, 2020 at 11:09 AM Manav Bhatia <bhatiamanav at gmail.com> wrote:
>
>>
>>
>> On Aug 20, 2020, at 8:31 AM, Stefano Zampini <stefano.zampini at gmail.com>
>> wrote:
>>
>> Can you add an MPI_Barrier before
>>
>> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>>
>>
>> With an MPI_Barrier before this function call:
>> - three of the processes have already hit this barrier,
>> - the other five are inside MatStashScatterGetMesg_Private ->
>> MatStashScatterGetMesg_BTS -> MPI_Waitsome (2 processes) / MPI_Waitall
>> (3 processes).
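
For context, the call Stefano points at appears to live inside PETSc's
MatAssemblyEnd_MPIAIJ (src/mat/impls/aij/mpi/mpiaij.c). A minimal sketch of
that diagnostic edit, assuming mat, aij, mode, and ierr are the variables
already in scope in that function:

  /* Force every rank to synchronize before assembling the local
     (diagonal) block: any rank that never reaches this barrier is
     still stuck in the stash exchange above, which is what the
     reported trace shows.  mat, aij, mode, and ierr are assumed to
     be in scope in the surrounding PETSc function. */
  ierr = MPI_Barrier(PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
  ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);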

This is not, by itself, evidence of an inconsistent state.  You can use

  -build_twosided allreduce

to avoid the nonblocking sparse algorithm.
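
If editing the launch command is inconvenient, the same option can also be
set programmatically before the first assembly; a minimal sketch using
PETSc's default options database:

  /* Equivalent to passing -build_twosided allreduce on the command
     line; set it before the first MatAssemblyBegin so the two-sided
     communication setup picks it up. */
  ierr = PetscOptionsSetValue(NULL,"-build_twosided","allreduce");CHKERRQ(ierr);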

>
> Okay, you should run this with -matstash_legacy just to make sure it is
> not a bug in your MPI implementation. But it looks like there is an
> inconsistency in the parallel state. This can happen because we have a
> bug, or it could be that you called a collective operation on a subset
> of the processes. Is there any way you could cut down the example (say,
> put all 1s in the matrix, etc.) so that you could give it to us to run?
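
A starting point for such a cut-down case might look like the sketch below.
It keeps a parallel AIJ matrix but puts 1.0 everywhere; the size n and the
tridiagonal pattern are placeholders for the application's real sparsity,
and the extra adds into neighbor-owned rows are there so the off-process
stash path (where the hang is reported) actually gets exercised. Run it
with -matstash_legacy or -build_twosided allreduce to compare assembly
paths.

  #include <petscmat.h>

  int main(int argc,char **argv)
  {
    Mat            A;
    PetscInt       i,n = 1000,Istart,Iend;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
    ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);

    /* placeholder tridiagonal stencil with every value equal to 1.0;
       substitute the application's real sparsity pattern here */
    ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);
    for (i=Istart; i<Iend; i++) {
      if (i>0)   {ierr = MatSetValue(A,i,i-1,1.0,ADD_VALUES);CHKERRQ(ierr);}
      if (i<n-1) {ierr = MatSetValue(A,i,i+1,1.0,ADD_VALUES);CHKERRQ(ierr);}
      ierr = MatSetValue(A,i,i,1.0,ADD_VALUES);CHKERRQ(ierr);
    }
    /* also add into rows owned by the neighboring ranks so MatStash
       actually has off-process entries to communicate */
    if (Istart > 0) {ierr = MatSetValue(A,Istart-1,Istart-1,1.0,ADD_VALUES);CHKERRQ(ierr);}
    if (Iend < n)   {ierr = MatSetValue(A,Iend,Iend,1.0,ADD_VALUES);CHKERRQ(ierr);}

    ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }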

