[petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6?
Christon, Mark A
christon at lanl.gov
Thu Apr 18 08:49:15 CDT 2013
Hi Mark,
Thanks for the information. We thought something had changed and could see the effect, but couldn't quite pin it down.
To be clear, our pressure-Poisson equation is a warm and fluffy Laplacian, but it is typically quite stiff from a spectral point of view when dealing with boundary-layer meshes and complex geometry -- our norm. It is the first-order computational cost in our flow solver, so hits like the recent change are very problematic, particularly when they break a number of regression tests that run nightly across multiple platforms.
So, unfortunately, while I'd like to use something like Jacobi, it's completely ineffective for the operator (and RHS) in question.
Thanks.
- Mark
--
Mark A. Christon
Computational Physics Group (CCS-2)
Computer, Computational and Statistical Sciences Division
Los Alamos National Laboratory
MS D413, P.O. Box 1663
Los Alamos, NM 87545
E-mail: christon at lanl.gov
Phone: (505) 663-5124
Mobile: (505) 695-5649 (voice mail)
International Journal for Numerical Methods in Fluids <http://wileyonlinelibrary.com/journal/fld>
From: "Mark F. Adams" <mark.adams at columbia.edu<mailto:mark.adams at columbia.edu>>
Reply-To: PETSc users list <petsc-users at mcs.anl.gov<mailto:petsc-users at mcs.anl.gov>>
Date: Wed, 17 Apr 2013 19:42:47 -0400
To: PETSc users list <petsc-users at mcs.anl.gov<mailto:petsc-users at mcs.anl.gov>>
Subject: Re: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6?
Looking at the commit logs for icc, it looks like Hong has done a little messing around with the shifting tolerance:
- ((PC_Factor*)icc)->info.shiftamount = 1.e-12;
- ((PC_Factor*)icc)->info.zeropivot = 1.e-12;
+ ((PC_Factor*)icc)->info.shiftamount = 100.0*PETSC_MACHINE_EPSILON;
+ ((PC_Factor*)icc)->info.zeropivot = 100.0*PETSC_MACHINE_EPSILON;
(source: https://bitbucket.org/petsc/petsc/src/0ed735ceb4be/src/ksp/pc/impls/factor/icc/icc.c)
This lowers both the shift amount and the zero-pivot tolerance: 100.0*PETSC_MACHINE_EPSILON is about 2.2e-14 in double precision, versus the old 1.e-12. You might set these back to 1e-12:
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetZeroPivot.html
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetShiftAmount.html
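If you want to do that from code, here is a minimal sketch (untested, and the helper name is mine; it assumes the icc is the PC that owns the factorization -- if the icc is buried inside ML as a level smoother, set the same values through the options database with the appropriate prefix instead):

#include <petscksp.h>

/* Sketch: restore the pre-3.3 icc tolerances shown in the diff above.
   Assumes ksp is your already-created solver with an icc PC. */
PetscErrorCode RestoreOldIccTolerances(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCFactorSetZeroPivot(pc, 1.e-12);CHKERRQ(ierr);   /* 3.3 default: 100.0*PETSC_MACHINE_EPSILON */
  ierr = PCFactorSetShiftAmount(pc, 1.e-12);CHKERRQ(ierr); /* ditto */
  return 0;
}

The command-line equivalent would be -pc_factor_zeropivot 1.e-12 -pc_factor_shift_amount 1.e-12 (with a prefix, e.g. -sub_pc_factor_..., if the icc lives under a block or multigrid PC).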
BTW, using an indefinite preconditioner, which has to be patched up with is-this-a-small-number kind of code, on a warm and fluffy Laplacian is not recommended. As I said before, I would just use jacobi -- god gave you an easy problem. Exploit it.
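Concretely, that swap is just an options change (the mg_levels_ line is an assumption about where the icc smoother sits if you keep ML):

  -ksp_type cg -pc_type jacobi

or, keeping ML but replacing the level smoother:

  -pc_type ml -mg_levels_pc_type jacobi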
On Apr 17, 2013, at 7:22 PM, "Mark F. Adams" <mark.adams at columbia.edu> wrote:
Begin forwarded message:
From: "Christon, Mark A" <christon at lanl.gov<mailto:christon at lanl.gov>>
Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6?
Date: April 17, 2013 7:06:11 PM EDT
To: "Mark F. Adams" <mark.adams at columbia.edu<mailto:mark.adams at columbia.edu>>, "Bakosi, Jozsef" <jbakosi at lanl.gov<mailto:jbakosi at lanl.gov>>
Hi Mark,
Yes, it looks like the new version does a little better after 2 iterations, but at the 8th iteration the residuals increase :(
I suspect this is why PETSc is whining about an indefinite preconditioner.
Something definitely changed, as we've had about 6-8 regression tests start failing that had been running flawlessly with ML + PETSc 3.1-p8 for almost two years.
If we can understand what changed, we probably have a fighting chance of correcting it -- assuming it's some solver setting for PETSc that we're not currently using.
- Mark
--
Mark A. Christon
Computational Physics Group (CCS-2)
Computer, Computational and Statistical Sciences Division
Los Alamos National Laboratory
MS D413, P.O. Box 1663
Los Alamos, NM 87545
E-mail: christon at lanl.gov
Phone: (505) 663-5124
Mobile: (505) 695-5649 (voice mail)
International Journal for Numerical Methods in Fluids <http://wileyonlinelibrary.com/journal/fld>
From: "Mark F. Adams" <mark.adams at columbia.edu<mailto:mark.adams at columbia.edu>>
Date: Wed, 17 Apr 2013 18:51:02 -0400
To: PETSc users list <petsc-users at mcs.anl.gov<mailto:petsc-users at mcs.anl.gov>>
Cc: "Mark A. Christon" <christon at lanl.gov<mailto:christon at lanl.gov>>
Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6?
I see you are using icc. Perhaps our icc changed a bit between versions. These results look like both solves are working and the old does a little better (after two iterations).
Try using jacobi instead of icc.
On Apr 17, 2013, at 6:21 PM, Jozsef Bakosi <jbakosi at lanl.gov> wrote:
On 04.17.2013 15:38, Matthew Knepley wrote:
On 04.17.2013 14:26, Jozsef Bakosi wrote:
Mark F. Adams <mark.adams at columbia.edu>
Wed Apr 17 14:25:04 CDT 2013
2) If you get "Indefinite PC" (I am guessing from using CG) it is because the preconditioner really is indefinite (or possibly non-symmetric). We improved the checking for this in one of those releases.
AMG does not guarantee an SPD preconditioner, so why persist in trying to use CG?
AMG is positive if everything is working correctly.
Are these problems only semidefinite? Singular systems can give erratic behavior.
It is a Laplace operator from Galerkin finite elements. And the PC is fine on 1, 2, 3, and 5 ranks -- indefinite only on 4. I think we can safely say that the same PC should be positive on 4 as well.
Why is it safe? Because it sounds plausible? Mathematics is replete with things that sound plausible and are false. Are there proofs that suggest this? Is there computational evidence? Why would I believe you?
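One cheap way to gather computational evidence is to apply the assembled preconditioner to a few random vectors and check the sign of x . P^{-1} x, which must be positive for an SPD preconditioner (this is essentially the quantity CG's indefinite-PC check looks at). A sketch, assuming pc is the set-up ML preconditioner and x, y are compatible work vectors; note that a handful of probes can only demonstrate indefiniteness, never prove definiteness:

#include <petscksp.h>

/* Apply the PC to random vectors; a negative x . P^{-1} x is hard
   evidence that the preconditioner is indefinite. */
PetscErrorCode ProbeDefiniteness(PC pc, Vec x, Vec y, PetscInt nprobes)
{
  PetscRandom    rand;
  PetscScalar    dot;
  PetscInt       i;
  PetscErrorCode ierr;

  ierr = PetscRandomCreate(PETSC_COMM_WORLD, &rand);CHKERRQ(ierr);
  for (i = 0; i < nprobes; i++) {
    ierr = VecSetRandom(x, rand);CHKERRQ(ierr);
    ierr = PCApply(pc, x, y);CHKERRQ(ierr);  /* y = P^{-1} x */
    ierr = VecDot(x, y, &dot);CHKERRQ(ierr); /* x . P^{-1} x */
    ierr = PetscPrintf(PETSC_COMM_WORLD, "probe %D: x.Pinvx = %g\n",
                       i, (double)PetscRealPart(dot));CHKERRQ(ierr);
  }
  ierr = PetscRandomDestroy(&rand);CHKERRQ(ierr);
  return 0;
}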
Okay, so here is some additional information:
I tried both old and new PETSc versions again, now taking only 2 iterations (both with 4 CPUs), and checked the residuals. I get the exact same PC from ML in both cases; however, the residuals are different after both iterations:
Please do a diff on the attached files and you can verify that the ML diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc., while the norms coming out of the solver at both iterations are different.
We reproduced the same exact behavior on two different Linux platforms.
Once again: same application source code, same ML source code, different PETSc: 3.1-p8 vs. 3.3-p6.
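(For anyone replaying this comparison from the archives: the exact options used aren't shown in the thread, but diagnostics like these can be produced with something along the following lines, in a PETSc options file or on the command line -- treat the specific flags as an assumption:)

  # ML setup diagnostics (eigenvalue estimates, aggregation info)
  -pc_type ml -pc_ml_PrintLevel 5
  # CG with the residual norm printed at each of the 2 iterations
  -ksp_type cg -ksp_max_it 2 -ksp_monitor
  # dump the full solver configuration so the two builds can be diffed
  -ksp_view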
<old.out><new.out>