[petsc-users] Poor weak scaling when solving successive linear systems
Lawrence Mitchell
lawrence.mitchell at imperial.ac.uk
Thu May 24 02:39:36 CDT 2018
> On 24 May 2018, at 06:24, Michael Becker <Michael.Becker at physik.uni-giessen.de> wrote:
>
> Could you have a look at the attached log_view files and tell me if something is particularly odd? The system size per processor is 30^3 and the simulation ran over 1000 timesteps, which means KSPSolve() was called equally often. I introduced two new logging stages: one for the first solve and the final setup, and one for the remaining solves.
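For reference, separate log stages like those are usually set up with PetscLogStageRegister/Push/Pop. A minimal sketch (the names stage_first/stage_rest, and the assumption that a KSP ksp, vectors b and x, and nsteps timesteps already exist, are mine):

    PetscLogStage stage_first, stage_rest;
    PetscInt      step;

    PetscLogStageRegister("First solve + setup", &stage_first);
    PetscLogStageRegister("Remaining solves",    &stage_rest);

    PetscLogStagePush(stage_first);
    KSPSolve(ksp, b, x);                  /* first solve; triggers the PC setup */
    PetscLogStagePop();

    PetscLogStagePush(stage_rest);
    for (step = 1; step < nsteps; ++step) {
      /* ... update the right-hand side b for this timestep ... */
      KSPSolve(ksp, b, x);
    }
    PetscLogStagePop();

Each stage then gets its own block in the -log_view output.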
The two attached logs use CG for the 125 proc run, but gcr for the 1000 proc run. Is this deliberate?
125 proc:
-gamg_est_ksp_type cg
-ksp_norm_type unpreconditioned
-ksp_type cg
-log_view
-mg_levels_esteig_ksp_max_it 10
-mg_levels_esteig_ksp_type cg
-mg_levels_ksp_max_it 1
-mg_levels_ksp_norm_type none
-mg_levels_ksp_type richardson
-mg_levels_pc_sor_its 1
-mg_levels_pc_type sor
-pc_gamg_type classical
-pc_type gamg
1000 proc:
-gamg_est_ksp_type cg
-ksp_norm_type unpreconditioned
-ksp_type gcr
-log_view
-mg_levels_esteig_ksp_max_it 10
-mg_levels_esteig_ksp_type cg
-mg_levels_ksp_max_it 1
-mg_levels_ksp_norm_type none
-mg_levels_ksp_type richardson
-mg_levels_pc_sor_its 1
-mg_levels_pc_type sor
-pc_gamg_type classical
-pc_type gamg
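As an aside, the CG + classical GAMG configuration above corresponds roughly to the following in code. This is only a sketch: it assumes the KSP ksp already exists with its operators attached, and it leaves the -mg_levels_* and -gamg_est_* smoother/eigenvalue-estimate options to the options database:

    PC pc;

    KSPSetType(ksp, KSPCG);
    KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED);
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCGAMG);
    PCGAMGSetType(pc, PCGAMGCLASSICAL);   /* -pc_gamg_type classical */
    KSPSetFromOptions(ksp);               /* picks up the remaining command-line options */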
That aside, it looks like you have quite a bit of load imbalance. For example, in the smoother, where you're doing MatSOR, you have:
                        Calls    Ratio   Max time (s)   Max/Min time
125 proc:   MatSOR      47808    1.0     6.8888e+01     1.7
1000 proc:  MatSOR      41400    1.0     6.3412e+01     1.6
VecScatters show similar behaviour.
How is your problem distributed across the processes?
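One quick way to check (a sketch, assuming the system matrix is called A) is to have every rank print its local row count and look for outliers:

    PetscInt    rstart, rend;
    PetscMPIInt rank;

    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    MatGetOwnershipRange(A, &rstart, &rend);
    PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%d] local rows: %D\n", rank, rend - rstart);
    PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);

If some ranks own noticeably more rows (or get larger subdomains in the decomposition), that would show up as exactly this kind of Max/Min ratio in the smoother.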
Cheers,
Lawrence