[petsc-users] Parallel Matrix Causes a Deadlock

Smith, Barry F. bsmith at mcs.anl.gov
Sun Jan 28 13:13:42 CST 2018


  You are assuming that all processes enter this loop an identical number of times

for (loop2 = ownbegin; loop2 < ownend; loop2++) {

    GetWaveletCoeff2DSingleRow_x1(FirstWavelets, 4, loop2);

otherwise some processes end up inside MatAssemblyBegin/End while others never call it, and the collective call waits forever.

Each function in PETSc is labeled as collective or not. Any function labeled collective needs to be called in the same order and the same number of times on all processes.

  In your case, just move the MatAssemblyBegin/End out of the function and outside of the loop, so that it is called exactly once by every process.
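
  Schematically it would look something like this (a minimal sketch of the usual PETSc pattern rather than your exact code; only GetWaveletCoeff2DSingleRow_x1 and FirstWavelets are taken from your attachment):

  PetscErrorCode ierr;
  PetscInt       ownbegin, ownend, loop2;

  /* each process gets its own (possibly different sized) range of rows */
  ierr = MatGetOwnershipRange(FirstWavelets, &ownbegin, &ownend);CHKERRQ(ierr);

  for (loop2 = ownbegin; loop2 < ownend; loop2++) {
    /* fills one local row (MatSetValues etc.); no assembly calls in here */
    GetWaveletCoeff2DSingleRow_x1(FirstWavelets, 4, loop2);
  }

  /* collective: every process calls these exactly once, after its loop is done */
  ierr = MatAssemblyBegin(FirstWavelets, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(FirstWavelets, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  The loop trip counts may differ between processes, but that no longer matters because the collective assembly calls are made the same number of times (once) on every process.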


> On Jan 28, 2018, at 1:09 PM, Ali Berk Kahraman <aliberkkahraman at yahoo.com> wrote:
> 
> Hello All,
> 
> 
> The code takes a parallel matrix and calls a function using that matrix. That function fills the specified row of the matrix with the rank of the process that owns that part of the matrix. You can see the short code in the attachment; it is about 80 lines.
> 
> 
> The problem is that the code gets into a deadlock at some point, usually at the last row of each process except for the last process (the one with the greatest rank). I use PETSc with the configure options "--with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack".
> 
> 
> I am a beginner with MPI, so I do not know what may be causing this. My apologies in advance if this is a very trivial problem.
> 
> 
> Best Regards to All,
> 
> 
> Ali Berk Kahraman
> 
> M.Sc. Student, Mechanical Eng.
> 
> Bogazici Uni., Istanbul, Turkey
> 
> <PetscMPIProblem.c>
