[petsc-dev] [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6?
Mark F. Adams
mark.adams at columbia.edu
Thu Apr 18 09:58:44 CDT 2013
>
> On the related issues with the ICC preconditioner, I do suspect that ML has a (or another) bug in it that is being triggered by the recent ICC changes, but we don't have time to chase it down right now. Note that we have spotted a number of bugs/memory issues in ML over the past couple of years, so this wouldn't be surprising.
>
This does not have anything to do with ML. It's not obvious, but PETSc only uses ML to construct the AMG prolongation operators and then uses them in its own MG framework, so the smoothers are all in PETSc and do not interact with ML.
hypre, on the other hand, is more monolithic: you get all of the hypre machinery as a PC, smoothers included, and only the outer Krylov solver is in PETSc.
hypre would be worth trying.
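To make the distinction concrete, here is a minimal sketch of the two option sets (the smoother settings are only illustrative defaults, not taken from your runs):
-ksp_type cg -pc_type ml -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi
versus
-ksp_type cg -pc_type hypre -pc_hypre_type boomeramg
With ml the -mg_levels_* options select PETSc's own smoothers on each level; with hypre the smoother choices go through -pc_hypre_boomeramg_* options and run entirely inside hypre.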
> Related: I know Bob Taylor fairly well and have followed your work for some time. I was one of the people behind the push to try and use your preconditioner at SIMULA for Abaqus/Standard and possibly Abaqus/CFD.
>
> Do you think that your preconditioner would do a good job on this seemingly benign, but typically very stiff system? We haven't taken the time to try it with our PETSc interface.
>
My solver (gamg) uses the same algorithm as ML, so you should be able to get the same results, more or less. One important parameter in both ML and gamg is the drop-tolerance threshold: it drops weak connections and gives you semi-coarsening, which can be very useful for anisotropic problems (anisotropic grids, in your case). ML defaults to zero, while gamg has a small nonzero default, and that difference alone could matter. You might try playing with "-pc_ml_Threshold <0>"; I would try values like:
0.0 (the default, just to check)
.0001
.001
.01
.05
.1
I would expect you would see drops in iteration counts, perhaps large ones, but the cost of each iteration does go up.
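For example (assuming ml or gamg is the outermost PC, so the options need no extra prefix):
-pc_type ml -pc_ml_Threshold 0.01
-pc_type gamg -pc_gamg_threshold 0.01
and sweep the threshold over the values above while watching both the iteration count and the time per iteration.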
Mark
> Thanks again.
>
> - Mark
>
> --
> Mark A. Christon
> Computational Physics Group (CCS-2)
> Computer, Computational and Statistical Sciences Division
> Los Alamos National Laboratory
> MS D413, P.O. Box 1663
> Los Alamos, NM 87545
>
> E-mail: christon at lanl.gov
> Phone: (505) 663-5124
> Mobile: (505) 695-5649 (voice mail)
>
> International Journal for Numerical Methods in Fluids
>
> From: "Mark F. Adams" <mark.adams at columbia.edu>
> Date: Thu, 18 Apr 2013 10:12:51 -0400
> To: "Mark A. Christon" <christon at lanl.gov>
> Cc: PETSc users list <petsc-users at mcs.anl.gov>, "Mark F. Adams" <mark.adams at columbia.edu>
> Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6?
>
>> Note, you need to damp jacobi for a smoother with something like this:
>>
>> -mg_levels_ksp_type chebyshev
>> -mg_levels_ksp_chebyshev_estimate_eigenvalues 0,0.1,0,1.05
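>>
>> For example, a full set of level-smoother options along these lines might be (a sketch, assuming jacobi as the level PC):
>> -mg_levels_pc_type jacobi
>> -mg_levels_ksp_type chebyshev
>> -mg_levels_ksp_chebyshev_estimate_eigenvalues 0,0.1,0,1.05
>> where the four numbers scale the estimated extreme eigenvalues, roughly targeting the interval [0.1*lambda_max, 1.05*lambda_max]; that scaling is what provides the damping.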
>>
>>
>> On Apr 18, 2013, at 9:49 AM, "Christon, Mark A" <christon at lanl.gov> wrote:
>>
>>> Hi Mark,
>>>
>>> Thanks for the information. We thought something had changed and could see its effect, but couldn't quite pin it down.
>>>
>>> To be clear, our pressure-Poisson equation is a warm and fluffy Laplacian, but it is typically quite stiff from a spectral point of view when dealing with boundary-layer meshes and complex geometry, which is our norm. This is the first-order computational cost in our flow solver, so hits like the recent change are very problematic, particularly when they break a number of regression tests that have been running nightly across multiple platforms.
>>>
>>> So, unfortunately, while I'd like to use something like Jacobi, it's completely ineffective for the operator (and RHS) in question.
>>>
>>> Thanks.
>>>
>>> - Mark
>>>
>>> --
>>> Mark A. Christon
>>> Computational Physics Group (CCS-2)
>>> Computer, Computational and Statistical Sciences Division
>>> Los Alamos National Laboratory
>>> MS D413, P.O. Box 1663
>>> Los Alamos, NM 87545
>>>
>>> E-mail: christon at lanl.gov
>>> Phone: (505) 663-5124
>>> Mobile: (505) 695-5649 (voice mail)
>>>
>>> International Journal for Numerical Methods in Fluids
>>>
>>> From: "Mark F. Adams" <mark.adams at columbia.edu>
>>> Reply-To: PETSc users list <petsc-users at mcs.anl.gov>
>>> Date: Wed, 17 Apr 2013 19:42:47 -0400
>>> To: PETSc users list <petsc-users at mcs.anl.gov>
>>> Subject: Re: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6?
>>>
>>>> In looking at the logs for icc it looks like Hong has done a little messing around with the shifting tolerance:
>>>>
>>>> - ((PC_Factor*)icc)->info.shiftamount = 1.e-12;
>>>> - ((PC_Factor*)icc)->info.zeropivot = 1.e-12;
>>>> + ((PC_Factor*)icc)->info.shiftamount = 100.0*PETSC_MACHINE_EPSILON;
>>>> + ((PC_Factor*)icc)->info.zeropivot = 100.0*PETSC_MACHINE_EPSILON;
>>>>
>>>>
>>>> This looks like it would lower both the shift amount and the zero-pivot tolerance (100.0*PETSC_MACHINE_EPSILON is roughly 2.2e-14 in double precision, versus the old 1e-12). You might set these back to 1e-12.
>>>>
>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetZeroPivot.html
>>>>
>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetShiftAmount.html
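>>>>
>>>> For example (a sketch; the exact option prefix depends on where the icc factorization sits in your solver, e.g. it may need an mg_levels_ or sub_ prefix):
>>>> -pc_factor_zeropivot 1e-12
>>>> -pc_factor_shift_amount 1e-12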
>>>>
>>>> BTW, using an indefinite preconditioner that has to be patched up with is-this-a-small-number kind of code on a warm and fluffy Laplacian is not recommended. As I said before, I would just use jacobi -- god gave you an easy problem. Exploit it.
>>>>
>>>> On Apr 17, 2013, at 7:22 PM, "Mark F. Adams" <mark.adams at columbia.edu> wrote:
>>>>
>>>>>
>>>>>
>>>>> Begin forwarded message:
>>>>>
>>>>>> From: "Christon, Mark A" <christon at lanl.gov>
>>>>>> Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6?
>>>>>> Date: April 17, 2013 7:06:11 PM EDT
>>>>>> To: "Mark F. Adams" <mark.adams at columbia.edu>, "Bakosi, Jozsef" <jbakosi at lanl.gov>
>>>>>>
>>>>>> Hi Mark,
>>>>>>
>>>>>> Yes, it looks like the new version does a little better after 2 iterations, but at the 8th iteration the residuals increase :(
>>>>>>
>>>>>> I suspect this is why PETSc is whining about an indefinite preconditioner.
>>>>>>
>>>>>> Something definitely changed, as we've had about 6-8 regression tests start failing that had been running flawlessly with ML + PETSc 3.1-p8 for almost two years.
>>>>>>
>>>>>> If we can understand what changed, we probably have a fighting chance of correcting it — assuming it's some solver setting for PETSc that we're not currently using.
>>>>>>
>>>>>> - Mark
>>>>>>
>>>>>> --
>>>>>> Mark A. Christon
>>>>>> Computational Physics Group (CCS-2)
>>>>>> Computer, Computational and Statistical Sciences Division
>>>>>> Los Alamos National Laboratory
>>>>>> MS D413, P.O. Box 1663
>>>>>> Los Alamos, NM 87545
>>>>>>
>>>>>> E-mail: christon at lanl.gov
>>>>>> Phone: (505) 663-5124
>>>>>> Mobile: (505) 695-5649 (voice mail)
>>>>>>
>>>>>> International Journal for Numerical Methods in Fluids
>>>>>>
>>>>>> From: "Mark F. Adams" <mark.adams at columbia.edu>
>>>>>> Date: Wed, 17 Apr 2013 18:51:02 -0400
>>>>>> To: PETSc users list <petsc-users at mcs.anl.gov>
>>>>>> Cc: "Mark A. Christon" <christon at lanl.gov>
>>>>>> Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6?
>>>>>>
>>>>>>> I see you are using icc. Perhaps our icc changed a bit between versions. These results look like both solves are working and the old does a little better (after two iterations).
>>>>>>>
>>>>>>> Try using jacobi instead of icc.
>>>>>>>
>>>>>>>
>>>>>>> On Apr 17, 2013, at 6:21 PM, Jozsef Bakosi <jbakosi at lanl.gov> wrote:
>>>>>>>
>>>>>>>>> On 04.17.2013 15:38, Matthew Knepley wrote:
>>>>>>>>>> On 04.17.2013 14:26, Jozsef Bakosi wrote:
>>>>>>>>>>> Mark F. Adams mark.adams at columbia.edu
>>>>>>>>>>> Wed Apr 17 14:25:04 CDT 2013
>>>>>>>>>>> 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the
>>>>>>>>>>> preconditioner
>>>>>>>>>>> really is indefinite (or possibly non-symmetric). We improved the checking
>>>>>>>>>>> for this in one
>>>>>>>>>>> of those releases.
>>>>>>>>>>> AMG does not guarantee an SPD preconditioner so why persist in trying to use
>>>>>>>>>>> CG?
>>>>>>>>>>> AMG is positive if everything is working correctly.
>>>>>>>>>>> Are these problems only semidefinite? Singular systems can give erratic
>>>>>>>>>>> behavior.
>>>>>>>>>> It is a Laplace operator from Galerkin finite elements. And the PC is fine on
>>>>>>>>>> ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the
>>>>>>>>>> same PC should be positive on 4 as well.
>>>>>>>>> Why is it safe? Because it sounds plausible? Mathematics is replete with things
>>>>>>>>> that sound plausible and are false. Are there proofs that suggest this? Is there
>>>>>>>>> computational evidence? Why would I believe you?
>>>>>>>> Okay, so here is some additional information:
>>>>>>>> I tried both old and new PETSc versions again, but now only taking 2 iterations
>>>>>>>> (both with 4 CPUs) and checked the residuals. I get the same exact PC from ML in
>>>>>>>> both cases, however, the residuals are different after both iterations:
>>>>>>>> Please do a diff on the attached files and you can verify that the ML
>>>>>>>> diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc,
>>>>>>>> while the norms coming out of the solver at both iterations are different.
>>>>>>>> We reproduced the same exact behavior on two different linux platforms.
>>>>>>>> Once again: same application source code, same ML source code, different PETSc:
>>>>>>>> 3.1-p8 vs. 3.3-p6.
>>>>>>>> <old.out><new.out>
>>>>>>>
>>>>>>>
>>>>>
>>>>
>>