Non-repeatability issue

Barry Smith bsmith at mcs.anl.gov
Wed Mar 12 08:41:58 CDT 2008


   Aldo,

1) Have you made runs where you require, say, -ksp_rtol 1.e-12 to
eliminate the effects of not solving the linear systems accurately?
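For example, something along these lines (just a sketch; "./yourcode" and
the snes variable are stand-ins for whatever your code already has):

    /* at run time */
    mpiexec -n 4 ./yourcode -snes_monitor -ksp_rtol 1.0e-12

    /* or hard-wired in the code, after the SNES is set up */
    KSP ksp;
    SNESGetKSP(snes, &ksp);
    KSPSetTolerances(ksp, 1.0e-12, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);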

2) Have you run the exact example that you ran with the geometric
decomposition also with the parmetis decomposition? Is that what you
sent? (This is to eliminate any fundamental differences between the
two problems.)

3) In your plots you show the L_2 norm of the mass residual while Newton
is running on all equations. This means Newton's criterion for progress
is based on || u, v, m, ... || as it chugs along. What do plots of
|| u, v, m, ... || (that is what Newton calls the residual when you use
-snes_monitor) look like? Are they also unstable? Are they decreasing?
Sometimes people scale some equations more strongly than others if those
are the residuals they are most interested in pushing down. What happens
if you scale the mass residual equations by some factor (say 100 or
1000) in your FormFunction?
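Something like the following (just a sketch, assuming an interlaced
layout with the mass equation stored as component 0 of each node;
AppCtx, user->dof and the scale factor are placeholders for whatever
your code actually uses):

    PetscErrorCode FormFunction(SNES snes, Vec X, Vec F, void *ctx)
    {
      AppCtx            *user  = (AppCtx*)ctx;  /* your application context */
      const PetscScalar  scale = 100.0;         /* try 100 or 1000 */
      PetscScalar       *f;
      PetscInt           i, nloc;

      /* ... assemble the usual residual into F here, as you do now ... */

      /* then weight the mass (continuity) component of the residual */
      VecGetLocalSize(F, &nloc);
      VecGetArray(F, &f);
      for (i = 0; i < nloc; i += user->dof) f[i] *= scale;
      VecRestoreArray(F, &f);
      return 0;
    }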

4) Getting MPI to do the updates the same every run is not trivial;
I'll think about how it might be done.
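(For anyone following along, the underlying issue is simply that
floating-point addition is not associative, so the order in which the
ghost-node contributions arrive can change the last bits of the sum.
A tiny standalone illustration with made-up numbers:

    #include <stdio.h>

    int main(void)
    {
      /* three contributions to a shared node, summed in two orders */
      double a = 1.0e16, b = -1.0e16, c = 1.0;
      printf("(a+b)+c = %g\n", (a + b) + c);   /* prints 1 */
      printf("a+(b+c) = %g\n", a + (b + c));   /* prints 0 */
      return 0;
    }

With three or more contributions per node, a different arrival order can
therefore give a slightly different rhs, exactly as described below.)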

    Barry


On Mar 11, 2008, at 11:31 AM, Aldo Bonfiglioli wrote:

> Dear all,
> this is a follow-up to an old mail concerning non-repeatability
> issues in a parallel environment.
>
> We are solving the steady 3D RANS equations using
> Newton's algorithm. All equations (turbulence included)
> are fully coupled.
>
> Our non-linear convergence history shows remarkable
> non-repeatability among subsequent parallel runs
> (see the enclosed plot referred to as Metis partitioning).
>
> We believe (but of course cannot rule out other
> reasons) that this is due to the fact that
> at those "ghosted" nodes that are shared by more
> than two sub-domains the rhs will be slightly
> different among subsequent runs,
> depending on the order in which contributions
> are accumulated at these interfacial nodes.
> Since we push convergence down to machine accuracy,
> this may not be irrelevant.
>
> We then devised an alternative partitioning
> (though applicable only to very simple geometries)
> that guarantees that "ghosted" nodes are shared
> by no more than two processors, so that
> no more than two contributions will be accumulated
> at the ghosted nodes.
> Using this partitioning (referred to as geometric
> in the enclosed plots),
> subsequent runs do indeed give identical results.
>
> In all cases, of course, we start from identical initial
> solutions.
>
> A colleague of ours has suggested the following:
>
>> This could be a problem of the GMRES least-squares system being
>> poorly conditioned. I would try setting a higher (very high, in fact)
>> tolerance for the inner (linear system) solve and see whether the
>> outer (Newton) iterations become more predictable.
>
>
>> The other possibility is indeed in the communication "inexactness".
>> Are there any asynchronous communications used in PETSc or in your
>> code? Change all you can to synchronous communications and see
>> whether results become more stable.
>
> Could you comment in particular on the latter?
> Is there a way we can modify PETSc's behaviour via command line
> options or otherwise?
>
> Regards,
> Aldo
>
> -- 
> Dr. Aldo Bonfiglioli
> Dip.to di Ingegneria e Fisica dell'Ambiente (DIFA)
> Universita' della Basilicata
> V.le dell'Ateneo lucano, 10 85100 Potenza ITALY
> tel:+39.0971.205203 fax:+39.0971.205160
>
> <geometricPartitioning.pdf><MetisPartitioning.pdf>



