[petsc-users] Additive Schwarz Method output variable with processor number

Matija Kecman matijakecman at gmail.com
Wed Feb 16 05:54:10 CST 2011


Dear PETSc users,

I am new to PETSc and have been compiling and running some of the
examples. I have been investigating the Additive Schwarz Method
example (ksp/ksp/examples/tutorials/ex8.c) with the 'Basic method',
i.e. setting the overlap and using the default PETSc decomposition.
To study the effect of the number of processors, I ran the following
bash script (-n1 and -n2 are the mesh dimensions in the x- and
y-directions, and -overlap is the overlap passed to the
PCASMSetOverlap() routine):

for proc in 1 2 3 4; do
  mpirun -np $proc ex8 -machinesfile machinesfile -n1 500 -n2 500 \
    -overlap 2 -pc_asm_blocks 4 -ksp_monitor_true_residual \
    -sub_ksp_type preonly -sub_pc_type lu > ./log_$proc.dat
done
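
For reference, this is roughly how I understand those options map onto
the PCASM interface in C (a sketch only, not the actual ex8.c source; it
assumes a KSP whose operator has already been set with KSPSetOperators(),
and the helper name configure_asm is just mine):

#include <petscksp.h>

/* Sketch: apply the 'Basic method' settings used in the script above. */
PetscErrorCode configure_asm(KSP ksp, PetscInt overlap)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);          /* additive Schwarz */
  ierr = PCASMSetOverlap(pc, overlap);CHKERRQ(ierr);  /* the -overlap value */

  /* -pc_asm_blocks, -sub_ksp_type preonly and -sub_pc_type lu are picked up
     from the options database when the solver and its subsolvers are
     configured. */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  return 0;
}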

After cleaning up the log files and plotting log( ||Ae|| / ||Ax|| )
against the iteration number, I generated the attached figure. I am
wondering why the number of iterations required for convergence
depends on the number of processors used. According to the FAQ:

'The convergence of many of the preconditioners in PETSc, including
the default parallel preconditioner block Jacobi, depends on the number
of processes. The more processes, the (slightly) slower convergence it
has. This is the nature of iterative solvers: the more parallelism
means the more "older" information is used in the solution process,
hence slower convergence.'

but I seem to be observing the opposite effect.
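
If it is relevant, I guess the subdomain layout on each rank could be
inspected with something like the sketch below (my own diagnostic, not
part of ex8.c; I believe PCASMGetSubKSP() can only be called once the
solver has been set up, e.g. after KSPSetUp()):

#include <petscksp.h>

/* Sketch: print how many ASM blocks each MPI rank owns. */
PetscErrorCode report_asm_layout(KSP ksp)
{
  PC             pc;
  KSP           *subksp;
  PetscInt       nlocal, first;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCASMGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,
           "[%d] owns %D blocks, first global block %D\n",
           rank, nlocal, first);CHKERRQ(ierr);
  /* one-argument form, as in the PETSc version I am using */
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD);CHKERRQ(ierr);
  return 0;
}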

Many thanks,

Matija
-------------- next part --------------
Attachment: ASM.pdf (application/pdf, 7173 bytes)
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20110216/27bbcde9/attachment.pdf>

