[petsc-users] ML and -pc_factor_shift_nonzero
Matthew Knepley
knepley at gmail.com
Mon Apr 19 06:34:08 CDT 2010
On Mon, Apr 19, 2010 at 6:29 AM, <tribur at vision.ee.ethz.ch> wrote:
> Hi Jed,
>
>
> ML works now using, e.g., -mg_coarse_redundant_pc_factor_shift_type
>>> POSITIVE_DEFINITE. However, it converges very slowly using the default
>>> REDUNDANT for the coarse solve.
>>>
>>
>> "Converges slowly" or "the coarse-level solve is expensive"?
>>
>
> Hm, rather "converges slowly". ML is used inside the preconditioner for the
> Schur complement system, and the outer system, preconditioned with this
> approximate Schur complement, converges slowly, if you see what I mean.
>
> My particular problem is that the convergence rate depends strongly on the
> number of processors. With one processor, using ML to precondition the
> deeply inner system, the outer system converges in, e.g., 39 iterations.
> With np=10, however, it needs 69 iterations.
>
For Schur complement methods, the inner system usually has to be solved very
accurately. Are you accelerating a Krylov method for A^{-1}, or just using ML
by itself? For the same linear system tolerance, I would expect identical
convergence for the same system, independent of the number of processors.
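
One way to check this (a sketch; the "inner_" options prefix is hypothetical
and depends on how the inner solve is set up in your code) is to wrap the
inner application of A^{-1} in a Krylov method with a tight tolerance:

```
# Hypothetical prefix: replace "inner_" with the actual options
# prefix of the KSP that applies A^{-1} inside the Schur solve.
-inner_ksp_type gmres
-inner_ksp_rtol 1e-10
-inner_pc_type ml
-inner_ksp_monitor
```

If the outer iteration count still differs between np=1 and np=10 at a tight
inner tolerance, the variation is coming from somewhere other than the ML
preconditioner itself.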
Matt
> This iteration count is independent of the number of processes when using
> HYPRE (at least for np<80), but HYPRE (applied to this inner system, not in
> general) is slower and scales very badly. That's why I would like to use
> ML.
>
> Thinking about it, none of this should have anything to do with the choice
> of the direct solver for the coarse system inside ML (MUMPS or PETSc's
> own), should it? The direct solver solves the coarse system completely,
> independently of the number of processes, and shouldn't influence the
> effectiveness of ML, or am I wrong?
>
> I suggest
>> starting with
>>
>> -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package mumps
>>
>> or varying parameters in ML to see if you can make the coarse level
>> problem smaller without hurting convergence rate. You can do
>> semi-redundant solves if you scale processor counts beyond what MUMPS
>> works well with.
>>
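
The quoted suggestion can be spelled out as a set of coarse-level options (a
hedged sketch; the numeric values are illustrative, and the exact option
names should be checked against your PETSc version):

```
# Direct coarse solve with MUMPS:
-mg_coarse_pc_type lu
-mg_coarse_pc_factor_mat_solver_package mumps

# Or shrink the ML coarse problem (values are illustrative):
-pc_ml_maxNlevels 4
-pc_ml_maxCoarseSize 128

# Semi-redundant coarse solve: factor on, e.g., 4 groups of
# processes instead of fully redundantly on every process.
-mg_coarse_pc_type redundant
-mg_coarse_pc_redundant_number 4
-mg_coarse_redundant_pc_type lu
```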
>
> Thanks. So MUMPS is generally considered the fastest parallel direct
> solver?
>
> Depending on what problem you are solving, ML could be producing a
>> (nearly) singular coarse level operator in which case you can expect
>> very confusing and inconsistent behavior.
>>
>
> Could that also explain the degraded convergence rate when going from 1 to
> 10 processors, even though the equation system remains the same?
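
Whether the coarse operator is the culprit can be probed directly (a sketch,
using the same option conventions as above):

```
# Inspect the full solver hierarchy, including the coarse level:
-ksp_view

# Watch where convergence degrades as the process count changes:
-ksp_monitor_true_residual

# Shift the coarse factorization, as already used above, to guard
# against a (nearly) singular coarse operator:
-mg_coarse_redundant_pc_factor_shift_type POSITIVE_DEFINITE
```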
>
>
> Thanks a lot,
>
> Kathrin
>
>
>
--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener