[petsc-users] MatAssemblyEnd taking too long
Matthew Knepley
knepley at gmail.com
Thu Aug 20 10:12:00 CDT 2020
On Thu, Aug 20, 2020 at 11:09 AM Manav Bhatia <bhatiamanav at gmail.com> wrote:
>
>
> On Aug 20, 2020, at 8:31 AM, Stefano Zampini <stefano.zampini at gmail.com>
> wrote:
>
> Can you add an MPI_Barrier before
>
> ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
>
>
> With an MPI_Barrier before this function call:
> — three of the processes have already hit this barrier,
> — the other five are inside MatStashScatterGetMesg_Private ->
> MatStashScatterGetMesg_BTS -> MPI_Waitsome (2 processes) / MPI_Waitall (3
> processes)
>
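For reference, the barrier suggested above would go on the parallel
communicator just before the local-block assembly call. This is only a
sketch: it assumes the surrounding PETSc routine has the MPI matrix "mat"
and its MPIAIJ data "aij" in scope, as in the quoted line.

  ierr = MPI_Barrier(PetscObjectComm((PetscObject)mat));CHKERRQ(ierr); /* diagnostic only */
  ierr = MatAssemblyBegin(aij->A,mode);CHKERRQ(ierr);
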
Okay, you should run this with -matstash_legacy just to make sure it is not
a bug in your MPI implementation. But it looks like
there is an inconsistency in the parallel state. This can happen because we
have a bug, or it could be that you called a collective
operation on a subset of the processes. Is there any way you could cut down
the example (say, put all 1s in the matrix, etc.) so
that you could give it to us to run?
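
A reduced test along these lines could look like the sketch below. The
matrix size, the element-style insertion pattern, and the use of ADD_VALUES
are illustrative assumptions, not taken from the original application; the
point is that each rank adds all-ones contributions to a row owned by its
neighbor, so the stash communication in MatAssemblyBegin/End is exercised,
and the run can be repeated with -matstash_legacy for comparison.

  #include <petscmat.h>

  int main(int argc,char **argv)
  {
    Mat            A;
    PetscInt       e,rstart,rend,N = 1000;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
    ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
    /* 1D element-style assembly: element e touches rows/cols e and e+1,
       so the last owned element writes into a row owned by the next rank */
    for (e=rstart; e<rend && e<N-1; e++) {
      PetscInt    idx[2]  = {e,e+1};
      PetscScalar vals[4] = {1.0,1.0,1.0,1.0};
      ierr = MatSetValues(A,2,idx,2,idx,vals,ADD_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }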
Thanks,
Matt
> Also, in order to assess where the issue is, we need to see the values
> (per rank) of
>
> ((Mat_SeqAIJ*)aij->B->data)->nonew
> mat->was_assembled
> aij->donotstash
> mat->nooffprocentries
>
>
> I am working to get this information.
>
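One way to collect those values per rank is a temporary printout at the
start of the assembly routine; a sketch only, assuming the variable names
used above ("mat" for the MPI matrix, "aij" for its MPIAIJ data). Plain
printf is used deliberately, since synchronized output routines are
collective and could themselves hang in this situation.

  PetscMPIInt rank;
  ierr = MPI_Comm_rank(PetscObjectComm((PetscObject)mat),&rank);CHKERRQ(ierr);
  printf("[%d] nonew=%d was_assembled=%d donotstash=%d nooffprocentries=%d\n",
         (int)rank,
         (int)((Mat_SeqAIJ*)aij->B->data)->nonew,
         (int)mat->was_assembled,
         (int)aij->donotstash,
         (int)mat->nooffprocentries);
  fflush(stdout);
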
> Another question: is this the first matrix assembly of the code?
>
>
> Yes, this is the first matrix assembly in the code.
>
> If you change to pc_none, do you get the same issue?
>
>
> Yes, with "-pc_type none” the code is stuck at the same spot.
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/