[petsc-users] Parallel Matrix Causes a Deadlock

Smith, Barry F. bsmith at mcs.anl.gov
Sun Jan 28 13:24:15 CST 2018


  In your code there is NO reason to call MatAssemblyBegin/End where you do. Just pull it out of the function and call it once, after all the calls to MatSetValues(). I submit the same holds for any other code: move the assembly so it comes after all the MatSetValues() calls. If your code truly has to call the assembly routines a different number of times on different processes, please explain enough about it (or send it) so we can see why.

   Barry
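
A minimal sketch of the pattern described above (the matrix size, the rank-valued entries, and the 2018-era ierr/CHKERRQ error checking are illustrative assumptions, not the poster's attached code):

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat            A;
      PetscErrorCode ierr;
      PetscMPIInt    rank;
      PetscInt       i, j, rstart, rend, N = 8;
      PetscScalar    v;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

      ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
      ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
      ierr = MatSetFromOptions(A);CHKERRQ(ierr);
      ierr = MatSetUp(A);CHKERRQ(ierr);

      /* Each process sets only its own rows; the number of MatSetValue()
         calls is free to differ from process to process */
      ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
      v    = (PetscScalar)rank;
      for (i = rstart; i < rend; i++) {
        for (j = 0; j < N; j++) {
          ierr = MatSetValue(A, i, j, v, INSERT_VALUES);CHKERRQ(ierr);
        }
      }

      /* Assembly is collective: every process calls it exactly once,
         after ALL the MatSetValue() calls */
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }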


> On Jan 28, 2018, at 1:15 PM, Ali Berk Kahraman <aliberkkahraman at yahoo.com> wrote:
> 
> Hello All,
> 
> 
> My apologies, I had closed this e-mail window, and the first thing I read in the manual was: "ALL processes that share a matrix MUST call MatAssemblyBegin() and MatAssemblyEnd() the SAME NUMBER of times". So I understand that PETSc simply does not support an unequal number of assembly calls.
> 
> 
> My question then becomes: I have a problem at hand where I do not know how many calls each process will make to the MatAssembly routines. Any suggestions on how to make this work?
> 
> On 28-01-2018 22:09, Ali Berk Kahraman wrote:
>> Hello All, 
>> 
>> 
>> The code takes a parallel matrix and calls a function on it. That function fills a specified row of the matrix with the rank of the process that owns that part of the matrix. You can see the short code in the attachment; it is about 80 lines.
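
The attachment itself is not reproduced in the archive. A hypothetical sketch of the kind of function described (the name FillRowWithRank and its internals are assumptions), with the per-row assembly calls that produce the deadlock described below:

    /* Hypothetical reconstruction, NOT the attached code: fills one row of A
       with the calling process's rank, then assembles immediately.  Because
       the assembly sits inside this per-row function, processes that own
       different numbers of rows call MatAssemblyBegin/End different numbers
       of times, and the collective calls deadlock. */
    PetscErrorCode FillRowWithRank(Mat A, PetscInt row)
    {
      PetscErrorCode ierr;
      PetscMPIInt    rank;
      PetscInt       j, N;

      PetscFunctionBegin;
      ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
      ierr = MatGetSize(A, NULL, &N);CHKERRQ(ierr);
      for (j = 0; j < N; j++) {
        ierr = MatSetValue(A, row, j, (PetscScalar)rank, INSERT_VALUES);CHKERRQ(ierr);
      }
      /* Collective calls inside a loop whose trip count differs per process:
         this is where the run hangs */
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }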
>> 
>> 
>> The problem is that the code reaches a deadlock at some point, usually at the last row of each process except the last one (the one with the greatest rank). I use PETSc with the configure options "--with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack".
>> 
>> 
>> I am a beginner with MPI, so I do not know what may be causing this. My apologies in advance if this is a very trivial problem. 
>> 
>> 
>> Best Regards to All, 
>> 
>> 
>> Ali Berk Kahraman 
>> 
>> M.Sc. Student, Mechanical Eng. 
>> 
>> Bogazici Uni., Istanbul, Turkey 
>> 
> 


