Non-repeatability issue and difference between 2.3.0 and 2.3.3

Etienne PERCHAT etienne.perchat at
Wed Sep 24 11:21:23 CDT 2008

Dear PETSc users,


I am coming back to my comparison between v2.3.0 and v2.3.3p8.


I face a non-repeatability issue with v2.3.3 that I did not have with v2.3.0.

I have read the exchanges from March on a related subject, but in my case it is already at the first linear system solution that two successive runs give different results.


It happens when the number of processors used is greater than 2, even on
a standard PC.

I am solving MPIBAIJ symmetric systems with the Conjugate Residual
method, preconditioned with block Jacobi between subdomains and ILU(1)
inside each block.

This system results from a FE assembly on an unstructured mesh.
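For context, the solver combination above corresponds to something like the following PETSc options (spelled here with current option names; the exact spellings may differ across 2.3.x releases, so treat this as a sketch rather than my actual command line):

```
-mat_type mpibaij
-ksp_type cr
-pc_type bjacobi
-sub_pc_type ilu
-sub_pc_factor_levels 1
```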


I made all the runs using -log_summary and -ksp_truemonitor.


Starting with the same initial matrix and RHS, each run using 2.3.3p8
provides slightly different results while we obtain exactly the same
solution with v2.3.0.


With PETSc 2.3.3p8:

Run1:   Iteration= 68      residual= 3.19515221e+000       tolerance= 5.13305158e+000 0
Run2:   Iteration= 68      residual= 3.19588481e+000       tolerance= 5.13305158e+000 0
Run3:   Iteration= 68      residual= 3.19384417e+000       tolerance= 5.13305158e+000 0


With PETSc 2.3.0:

Run1:   Iteration= 68      residual= 3.19369843e+000       tolerance= 5.13305158e+000 0
Run2:   Iteration= 68      residual= 3.19369843e+000       tolerance= 5.13305158e+000 0


If I make a 4-processor run with a mesh partitioning such that no node
is located on more than 2 processors, I do not face the problem.


I first thought of an MPI effect related to the order in which
messages are received and then summed.

But then the behaviour should have been exactly the same with 2.3.0, shouldn't it?


Any tips or ideas?


Thanks in advance.

Best regards,


Etienne Perchat

