[petsc-users] Parallel Matrix Causes a Deadlock

Ali Berk Kahraman aliberkkahraman at yahoo.com
Sun Jan 28 13:15:13 CST 2018

Hello All,

My apologies, I had just closed this e-mail window when the first thing I 
read in the manual is "ALL processes that share a matrix MUST call 
MatAssemblyBegin and MatAssemblyEnd 
the SAME NUMBER of times". So I understand that PETSc simply does not 
support an unequal number of assembly calls.

My question then becomes: I have a problem at hand where I do not know 
in advance how many calls each process will make to the MatAssembly 
routines. Any suggestions on how to make this work?
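One common pattern (a sketch of my current idea, not something already in the attached code) would be to make the number of assembly rounds collective: each rank counts how many flush-assemblies it would naturally perform, all ranks take the global maximum with MPI_Allreduce, and then every rank participates in that many MatAssemblyBegin/MatAssemblyEnd calls with MAT_FLUSH_ASSEMBLY, even when it has nothing left to insert (an empty flush is legal). The function and variable names below are illustrative only:

```c
/* Sketch: equalize the number of assembly calls across all ranks
 * sharing a matrix. "my_rounds" is the number of flush-assemblies
 * this rank would perform on its own; the name is hypothetical. */
#include <petscmat.h>

PetscErrorCode FlushEqually(Mat A, PetscInt my_rounds)
{
  PetscErrorCode ierr;
  PetscInt       max_rounds, r;
  MPI_Comm       comm;

  ierr = PetscObjectGetComm((PetscObject)A, &comm);CHKERRQ(ierr);
  /* Every rank learns the largest round count any rank needs. */
  ierr = MPI_Allreduce(&my_rounds, &max_rounds, 1, MPIU_INT,
                       MPI_MAX, comm);CHKERRQ(ierr);
  for (r = 0; r < max_rounds; ++r) {
    /* Ranks that are already done still join the collective flush,
       so every rank makes the same number of assembly calls. */
    ierr = MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
  }
  /* One final collective assembly before the matrix is used. */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

Would something along these lines be the intended usage, or is there a built-in way to handle ranks that finish inserting at different times?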

On 28-01-2018 22:09, Ali Berk Kahraman wrote:
> Hello All,
> The code takes a parallel matrix and calls a function on it. That 
> function fills the specified row of the matrix with the id of the 
> process that owns that part of the matrix. You can see the short code 
> in the attachment; it is about 80 lines.
> The problem is that the code gets into a deadlock at some point, 
> usually at the last row of each process except for the last process 
> (greatest rank). I use petsc with configure options "--with-cc=gcc 
> --with-cxx=g++ --with-fc=gfortran --download-mpich 
> --download-fblaslapack".
> I am a beginner with MPI, so I do not know what may be causing this. 
> My apologies in advance if this is a very trivial problem.
> Best Regards to All,
> Ali Berk Kahraman
> M.Sc. Student, Mechanical Eng.
> Bogazici Uni., Istanbul, Turkey

