[petsc-users] MatAssemblyEnd hangs during a parallel calculation with PETSc>3.3

Projet_TRIOU triou at cea.fr
Wed Jan 15 05:57:57 CST 2014


Hi all,

I switched from PETSc 3.3 to PETSc 3.4.3 and all was fine except for
one of my test cases on 3 processors, where one processor was
handling an empty local part of the global matrix.

My code hangs during the call to MatAssemblyEnd:

ierr = MatMPIAIJSetPreallocation(MatricePetsc, PETSC_DEFAULT, d_nnz.addr(),
                                 PETSC_DEFAULT, o_nnz.addr());
...
ierr = MatAssemblyEnd(MatricePetsc, MAT_FINAL_ASSEMBLY);
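
For reference, here is a reduced, self-contained sketch of the situation
(not our actual code, and I have not checked whether this stripped-down
version reproduces the hang; the empty rank here still passes valid but
unused nnz arrays):

/* Hypothetical reduced case (not the TRIO_U code): run with 3 MPI ranks,
   the last rank owns zero rows of the global MPIAIJ matrix. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscErrorCode ierr;
  PetscMPIInt    rank;
  PetscInt       nlocal, i, rstart, rend;
  PetscInt       d_nnz[4], o_nnz[4];   /* big enough for the non-empty ranks */

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

  nlocal = (rank == 2) ? 0 : 4;        /* last rank gets an empty local part */
  for (i = 0; i < nlocal; i++) { d_nnz[i] = 1; o_nnz[i] = 0; }

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, d_nnz, PETSC_DEFAULT, o_nnz);CHKERRQ(ierr);

  /* Insert a simple diagonal so there is something to assemble */
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(A, i, i, 1.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);   /* hang reported here */

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}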

When I debugged, I noticed on the empty processor that in
src/mat/impls/aij/mpi/mpiaij.c:

   if (!((Mat_SeqAIJ*)aij->B->data)->nonew) {
     ierr = MPI_Allreduce(&mat->was_assembled,&other_disassembled,1,MPIU_BOOL,
                          MPI_PROD,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
     if (mat->was_assembled && !other_disassembled) {
       ierr = MatDisAssemble_MPIAIJ(mat);CHKERRQ(ierr);
     }
   }

((Mat_SeqAIJ*)aij->B->data)->nonew was 0 on the "empty" processor
and -2 on the two others, so only the empty processor entered the
collective MPI_Allreduce, which therefore never returned.
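
Just to make the hang mechanism explicit, here is a standalone
illustration (plain MPI, nothing to do with our code or with PETSc
internals beyond the pattern): a collective that only part of the ranks
reach blocks forever.

/* Illustration only: a collective reached by a subset of the ranks never
   completes.  Rank 0 plays the role of the "empty" processor whose nonew
   flag is 0 and which is therefore the only one entering the reduction. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, in = 1, out;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
    /* Only rank 0 calls the collective: it blocks here forever, like the
       MPI_Allreduce guarded by the nonew test in MatAssemblyEnd_MPIAIJ. */
    MPI_Allreduce(&in, &out, 1, MPI_INT, MPI_PROD, MPI_COMM_WORLD);
    printf("never reached\n");
  }

  MPI_Finalize();   /* the other ranks end up waiting here as well */
  return 0;
}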

I worked around the problem with a different preallocation call:

       if (nb_rows==0) // Local matrix is empty
          ierr = MatMPISBAIJSetPreallocation(MatricePetsc, block_size_,
                                             0, NULL, 0, NULL);
       else
          ierr = MatMPISBAIJSetPreallocation(MatricePetsc, block_size_,
                                             PETSC_DEFAULT, d_nnz.addr(),
                                             PETSC_DEFAULT, o_nnz.addr());

Now it runs fine. So I don't know whether this is a PETSc regression or
whether I was misusing MatMPISBAIJSetPreallocation by passing empty
d_nnz/o_nnz arrays...

Thanks,

Pierre


