<div dir="ltr">This is a bug in the setup phase for Mat. I did not put in the logic, so I might be missing the point. Here is<div>my outline:</div><div><br></div><div> 1) When MatSetPreallocation_*() is called, we set the NEW_NONZERO_ALLOCATION_ERR option, but ONLY</div>
<div> if the user passed a nonzero nz or nnz.</div><div><br></div><div> 2) During assembly, we check A->B->nonew, and do a collective call if it is 0, meaning NEW_NONZERO_LOCATIONS</div><div><br></div><div>I understand that we would like to avoid collective calls in MatAssembly(), but to guarantee correctness then we would</div>
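
In code, point 1 amounts to roughly the following (a paraphrase of the idea, not the exact preallocation source; here B stands for the local block and nz/nnz for the preallocation arguments):

  /* Only ranks that actually passed preallocation info turn the option on,
     so nonew can end up with different values on different ranks. */
  if (nz || nnz) {
    ierr = MatSetOption(B, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);
  }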

I understand that we would like to avoid collective calls in MatAssembly(), but to guarantee correctness we would
need the option value to be collective across ranks, which it is not right now. I see two fixes:

 Slightly Dangerous: Change SetPreallocation so that the option value is collective (every rank gets the same nonew)

 Slightly Slow: Always do the collective check for B disassembly, regardless of nonew (sketched below)
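
A minimal sketch of the second option, using the names from the mpiaij.c fragment quoted in Pierre's message below (illustrative only, not a tested patch):

  /* Always do the collective check at assembly end, instead of gating it on the
     local nonew value, which can differ between ranks after preallocation. */
  PetscBool other_disassembled;
  ierr = MPI_Allreduce(&mat->was_assembled, &other_disassembled, 1, MPIU_BOOL, MPI_PROD,
                       PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
  if (mat->was_assembled && !other_disassembled) {
    ierr = MatDisAssemble_MPIAIJ(mat);CHKERRQ(ierr);
  }

That costs one extra reduction per assembly, but every rank then takes the same path regardless of its local nonew.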

What do you think?

   Matt

---------- Forwarded message ----------
From: Projet_TRIOU <triou@cea.fr>
Date: Wed, Jan 15, 2014 at 5:57 AM
Subject: [petsc-users] MatAssemblyEnd hangs during a parallel calculation with PETSc>3.3
To: petsc-users@mcs.anl.gov

Hi all,

I switched from PETSc 3.3 to PETSc 3.4.3 and everything was fine except for
one of my test cases on 3 processors, where one processor was
handling an empty local part of the global matrix.

My code hangs during the call to MatAssemblyEnd:

ierr = MatMPIAIJSetPreallocation(MatricePetsc, PETSC_DEFAULT, d_nnz.addr(), PETSC_DEFAULT, o_nnz.addr());
...
ierr = MatAssemblyEnd(MatricePetsc, MAT_FINAL_ASSEMBLY);

When I debugged, I noticed that on the empty processor, in
src/mat/impls/aij/mpi/mpiaij.c:

if (!((Mat_SeqAIJ*)aij->B->data)->nonew) {
  ierr = MPI_Allreduce(&mat->was_assembled,&other_disassembled,1,MPIU_BOOL,MPI_PROD,PetscObjectComm((PetscObject)mat));CHKERRQ(ierr);
  if (mat->was_assembled && !other_disassembled) {
    ierr = MatDisAssemble_MPIAIJ(mat);CHKERRQ(ierr);
  }
}

((Mat_SeqAIJ*)aij->B->data)->nonew was 0 on the "empty" processor
and -2 on the 2 others, so only the empty processor entered the
collective MPI_Allreduce and the call hung there.

I worked around the problem with a different call to MatMPIAIJSetPreallocation():

if (nb_rows==0) // Local matrix is empty
  ierr = MatMPISBAIJSetPreallocation(MatricePetsc, block_size_, 0, NULL, 0, NULL);
else
  ierr = MatMPISBAIJSetPreallocation(MatricePetsc, block_size_, PETSC_DEFAULT, d_nnz.addr(), PETSC_DEFAULT, o_nnz.addr());

Now it runs well. So I don't know whether this is a PETSc regression or whether I was
wrongly calling MatMPISBAIJSetPreallocation with empty d_nnz/o_nnz arrays....

Thanks,

Pierre
</div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener
</div></div>