[petsc-dev] Fwd: [petsc-users] MatAssemblyEnd hangs during a parallel calculation with PETSc>3.3

Matthew Knepley knepley at gmail.com
Wed Jan 15 11:20:37 CST 2014


This is a bug in the setup phase for Mat. I did not put in the logic, so I
might be missing the point. Here is
my outline:

  1) When MatSetPreallocation_*() is called, we set the NEW_NONZERO_ALLOCATION_ERR option, but ONLY if the user passed a nonzero nz or nnz.

  2) During assembly, we check A->B->nonew and do a collective call if it is 0, meaning NEW_NONZERO_LOCATIONS (the MPI-only sketch below shows why a rank-dependent value there makes assembly hang).
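
To make the failure mode concrete, here is a minimal MPI-only sketch (not PETSc code; "flag" merely stands in for B->nonew) of what happens when a collective call is guarded by a flag that is not uniform across ranks:

/* Minimal MPI-only sketch: "flag" stands in for B->nonew.  The guarded
 * collective is only entered by the rank whose flag is 0, so that rank
 * blocks forever in MPI_Allreduce while the others move on. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, flag, in = 1, out = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  flag = (rank == 0) ? 0 : -2;   /* rank-dependent, like the reported nonew values */

  if (!flag) {
    /* Only rank 0 gets here, so this collective never completes. */
    MPI_Allreduce(&in, &out, 1, MPI_INT, MPI_PROD, MPI_COMM_WORLD);
  }

  printf("rank %d done\n", rank);
  MPI_Finalize();
  return 0;
}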

I understand that we would like to avoid collective calls in MatAssembly(), but to guarantee correctness we would then need to make the option value collective, which it is not right now. I see two possibilities:

  Slightly Dangerous: Change SetPreallocation to make the option value collective (rough sketch after this list)

  Slightly Slow: Always check for B disassembly
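
For the first option, a rough sketch of what making the value collective might look like; the helper name, its placement, and the have_nnz_local argument are hypothetical, only the reduction pattern is the point:

#include <petscmat.h>

/* Hypothetical helper: all ranks agree on whether any rank supplied
 * nz/nnz, then every rank sets MAT_NEW_NONZERO_ALLOCATION_ERR the same
 * way, so B->nonew ends up identical across the communicator. */
static PetscErrorCode MakeNonzeroAllocationErrCollective(Mat B, PetscBool have_nnz_local)
{
  PetscErrorCode ierr;
  PetscBool      have_nnz_global;

  ierr = MPI_Allreduce(&have_nnz_local, &have_nnz_global, 1, MPIU_BOOL, MPI_LOR, PetscObjectComm((PetscObject)B));CHKERRQ(ierr);
  ierr = MatSetOption(B, MAT_NEW_NONZERO_ALLOCATION_ERR, have_nnz_global);CHKERRQ(ierr);
  return 0;
}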

What do you think?

   Matt

---------- Forwarded message ----------
From: Projet_TRIOU <triou at cea.fr>
Date: Wed, Jan 15, 2014 at 5:57 AM
Subject: [petsc-users] MatAssemblyEnd hangs during a parallel calculation
with PETSc>3.3
To: petsc-users at mcs.anl.gov


Hi all,

I switched from PETSc 3.3 to PETSc 3.4.3 and everything was fine except for one of my test cases on 3 processors, where one processor was handling an empty local part of the global matrix.

My code hangs during the call to MatAssemblyEnd:

ierr = MatMPIAIJSetPreallocation(MatricePetsc, PETSC_DEFAULT, d_nnz.addr(), PETSC_DEFAULT, o_nnz.addr());
...
ierr = MatAssemblyEnd(MatricePetsc, MAT_FINAL_ASSEMBLY);
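
For reference, a minimal self-contained sketch of this call sequence (not the poster's actual code; BuildMatrix, n_local, n_global, and the nnz arrays are illustrative placeholders):

#include <petscmat.h>

/* Illustrative sequence: on a rank that owns zero rows, n_local is 0 and
 * d_nnz/o_nnz point at zero-length arrays, which is the situation that
 * triggered the hang. */
PetscErrorCode BuildMatrix(MPI_Comm comm, PetscInt n_local, PetscInt n_global,
                           const PetscInt d_nnz[], const PetscInt o_nnz[], Mat *A)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, n_local, n_local, n_global, n_global);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, PETSC_DEFAULT, d_nnz, PETSC_DEFAULT, o_nnz);CHKERRQ(ierr);

  /* ... MatSetValues() calls ... */

  ierr = MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);   /* hangs here in the reported case */
  return 0;
}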

When I debugged, I noticed that on the empty processor, in src/mat/impls/aij/mpi/mpiaij.c:

  if (!((Mat_SeqAIJ*)aij->B->data)->nonew) {
    ierr = MPI_Allreduce(&mat->was_assembled,&other_disassembled,1,MPIU_BOOL,MPI_PROD,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
    if (mat->was_assembled && !other_disassembled) {
      ierr = MatDisAssemble_MPIAIJ(mat);CHKERRQ(ierr);
    }
  }

((Mat_SeqAIJ*)aij->B->data)->nonew was 0 on the "empty" processor and -2 on the 2 others, so only the empty processor entered the MPI_Allreduce while the other two skipped it, and the collective call never returned.

I bypassed my problem with a different call to MatMPIAIJSetPreallocation():

      if (nb_rows==0) // Local matrix is empty
         ierr = MatMPISBAIJSetPreallocation(MatricePetsc, block_size_, 0, NULL, 0, NULL);
      else
         ierr = MatMPISBAIJSetPreallocation(MatricePetsc, block_size_, PETSC_DEFAULT, d_nnz.addr(), PETSC_DEFAULT, o_nnz.addr());

Now it runs well. So I don't know whether this is a PETSc regression or whether I was incorrectly calling MatMPISBAIJSetPreallocation with empty d_nnz/o_nnz arrays...

Thanks,

Pierre




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener