[petsc-users] Convergence is different with different processors
Barry Smith
bsmith at mcs.anl.gov
Sat Feb 16 16:50:21 CST 2013
On Feb 16, 2013, at 12:21 PM, w_ang_temp <w_ang_temp at 163.com> wrote:
>
>
> Hello, Barry
>
> Is it true that the BJacobi preconditioner is just one type of ASM preconditioner (overlap=0), i.e., only a special case of ASM?
> I find that sometimes the two give the same results and sometimes they do not. Is the reason that PETSc can
> internally pick a better parameter when using ASM, and that when it uses overlap=0, ASM and BJacobi give the same
> results?
> If that is true, is ASM always better than BJacobi?
By default PETSc's ASM uses an overlap of 1; if you set the overlap to 0, it is the same algorithm as block Jacobi.
Generally ASM with overlap > 0 converges better than block Jacobi, though on rare occasions it may not. ASM with overlap > 0 may also take more or less total time than block Jacobi, since it requires more time to set up and more time per iteration. So which is best in terms of time depends on the relative expense of the two methods and on how much ASM improves the convergence over block Jacobi (if it does). There are entire books written on this subject, and they barely scratch the surface of predicting which approach is best for a given problem.
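For example, the two can be compared directly from the command line (a sketch reusing the ex4f runs quoted below; -pc_type asm and -pc_asm_overlap are the standard PETSc options, and overlap 0 should reproduce block Jacobi):

    mpiexec -n 4 ./ex4f -ksp_type bcgs -pc_type bjacobi -ksp_converged_reason
    mpiexec -n 4 ./ex4f -ksp_type bcgs -pc_type asm -pc_asm_overlap 0 -ksp_converged_reason
    mpiexec -n 4 ./ex4f -ksp_type bcgs -pc_type asm -pc_asm_overlap 1 -ksp_converged_reason

Comparing iteration counts and timings across such runs shows whether the extra overlap pays for itself on a given problem.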
Barry
> Thanks. Jim
>
>
>
>
> At 2013-02-17 01:57:46, "Barry Smith" <bsmith at mcs.anl.gov> wrote:
> >
> > This is completely possible. The convergence of iterative methods is a complicated business. Even for something seemingly simple like block Jacobi, the convergence depends not only on the number of blocks, but also on the "shapes" of the blocks and on what part of the matrix is "discarded" in the preconditioner, both of which depend on the number and "shape" of the blocks. Decomposing the domain into 4 and 8 pieces results in different "shapes" of the blocks and different parts of the matrix being "discarded" in the preconditioner.
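> >
> > For example, -ksp_view (a standard PETSc option) reports the preconditioner each run actually constructed, including the number of blocks and the sub-solver applied on each block (by default one block per process):
> >
> >     mpiexec -n 4 ./ex4f -ksp_type bcgs -pc_type bjacobi -ksp_view
> >     mpiexec -n 8 ./ex4f -ksp_type bcgs -pc_type bjacobi -ksp_view
> >
> > Comparing that output for the 4- and 8-process runs makes the difference between the two preconditioners explicit.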
> >
> > Barry
> >
> >On Feb 16, 2013, at 11:36 AM, w_ang_temp <w_ang_temp at 163.com> wrote:
> >
> >>
> >>
> >> I use the two commands below on the same project. The first run diverges while the second converges.
> >> nohup mpiexec -n 4 ./ex4f -ksp_type bcgs -pc_type bjacobi -ksp_rtol 1.0e-5 -ksp_converged_reason >out.txt &
> >> nohup mpiexec -n 8 ./ex4f -ksp_type bcgs -pc_type bjacobi -ksp_rtol 1.0e-5 -ksp_converged_reason >out.txt &
> >> So what is the reason?
> >> Thanks. Jim
> >>
> >>
> >>
> >> >> At 2013-02-07 13:16:51, "Matthew Knepley" <knepley at gmail.com> wrote:
> >> >> On Thu, Feb 7, 2013 at 12:11 AM, w_ang_temp <w_ang_temp at 163.com> wrote:
> >> >> Hello,
> >> >> I use the same project, but I find that when a different number of processors is chosen,
> >> >> the convergence is different. For example, with 4 processors it diverges; with
> >> >> 8 processors it converges.
> >> >> So what is the reason?
> >>
> >> > It is likely that your preconditioner changed.
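> >>
> >> > One way to check this (a sketch; both are standard PETSc options) is to rerun with -ksp_view, which prints the preconditioner actually used, and -ksp_monitor_true_residual, which prints the residual history:
> >>
> >> >     mpiexec -n 4 ./ex4f -ksp_type bcgs -pc_type bjacobi -ksp_view -ksp_monitor_true_residual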
> >>
> >> > Matt
> >>
> >> >> Thanks.
> >> >> Jim.
> >>
> >>
> >>
> >>
> >>
> >> --
> >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> >> -- Norbert Wiener
> >>
> >>
> >
>
>
>