[petsc-users] Additive Schwarz Method output variable with processor number

Jed Brown jed at 59A2.org
Wed Feb 16 10:28:09 CST 2011


On Wed, Feb 16, 2011 at 12:54, Matija Kecman <matijakecman at gmail.com> wrote:

> After cleaning up the log files and plotting log(||Ae||/||Ax||) against
> iteration number, I generated the attached figure. I am wondering why
> the number of iterations needed for convergence depends on the number
> of processors used. According to the FAQ:
>
> 'The convergence of many of the preconditioners in PETSc, including
> the default parallel preconditioner block Jacobi, depends on the number
> of processes. The more processes, the (slightly) slower the
> convergence. This is the nature of iterative solvers: more parallelism
> means that more "old" information is used in the solution process,
> hence slower convergence.'
>
> but I seem to be observing the opposite effect.
>

You are using the same number of subdomains, but they are shaped
differently. It seems likely that you have ParMETIS installed, in which
case PCASM uses it to partition multiple subdomains on each process. In
that case, the subdomains are not as good as the rectangular partition
you get by using more processes. Compare:

$ mpiexec -n 1 ./ex8 -m 200 -n 200 -sub_pc_type lu -ksp_converged_reason -pc_type asm -pc_asm_blocks 4 -mat_partitioning_type parmetis
Linear solve converged due to CONVERGED_RTOL iterations 27
$ mpiexec -n 1 ./ex8 -m 200 -n 200 -sub_pc_type lu -ksp_converged_reason -pc_type asm -pc_asm_blocks 4 -mat_partitioning_type square
Linear solve converged due to CONVERGED_RTOL iterations 22
$ mpiexec -n 4 ./ex8 -m 200 -n 200 -sub_pc_type lu -ksp_converged_reason -pc_type asm -pc_asm_blocks 4
Linear solve converged due to CONVERGED_RTOL iterations 22
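In case it is useful, below is a minimal self-contained sketch of the kind of driver these runs exercise. It is not ex8 itself, and it assumes a recent PETSc whose calling sequences differ somewhat from the 2011-era release used in this thread; the file name asm_sketch.c, the m x n 5-point Laplacian, and the right-hand side of all ones are illustrative choices. The solver configuration is left to the command line so the invocations above carry over unchanged.

/* asm_sketch.c: solve a 2D 5-point Laplacian with an additive Schwarz
 * preconditioner.  Example run (mirroring the commands above):
 *   mpiexec -n 1 ./asm_sketch -m 200 -n 200 -pc_type asm -pc_asm_blocks 4 \
 *           -sub_pc_type lu -mat_partitioning_type square -ksp_converged_reason
 */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         x, b;
  KSP         ksp;
  PC          pc;
  PetscInt    m = 200, n = 200, i, j, Ii, J, Istart, Iend;
  PetscScalar v;

  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscOptionsGetInt(NULL, NULL, "-m", &m, NULL));
  PetscCall(PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL));

  /* Assemble the standard 5-point Laplacian on an m x n grid */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m * n, m * n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (Ii = Istart; Ii < Iend; Ii++) {
    i = Ii / n;
    j = Ii - i * n;
    v = -1.0;
    if (i > 0)     { J = Ii - n; PetscCall(MatSetValues(A, 1, &Ii, 1, &J, &v, INSERT_VALUES)); }
    if (i < m - 1) { J = Ii + n; PetscCall(MatSetValues(A, 1, &Ii, 1, &J, &v, INSERT_VALUES)); }
    if (j > 0)     { J = Ii - 1; PetscCall(MatSetValues(A, 1, &Ii, 1, &J, &v, INSERT_VALUES)); }
    if (j < n - 1) { J = Ii + 1; PetscCall(MatSetValues(A, 1, &Ii, 1, &J, &v, INSERT_VALUES)); }
    v = 4.0;
    PetscCall(MatSetValues(A, 1, &Ii, 1, &Ii, &v, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* Right-hand side of all ones (illustrative choice) */
  PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
  PetscCall(VecSetSizes(x, PETSC_DECIDE, m * n));
  PetscCall(VecSetFromOptions(x));
  PetscCall(VecDuplicate(x, &b));
  PetscCall(VecSet(b, 1.0));

  /* Additive Schwarz preconditioner; block count, subdomain solver, and
     partitioning type are taken from the command line. */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCASM));   /* same effect as -pc_type asm */
  PetscCall(KSPSetFromOptions(ksp)); /* picks up -ksp_*, -pc_*, -sub_* options */
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(PetscFinalize());
  return 0;
}

Setting PCASM in code while leaving the block count, subdomain solver, and partitioning type to KSPSetFromOptions keeps the experiment entirely option-driven, which is the usual PETSc pattern for this kind of comparison.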