[petsc-users] Is linear solver performance is worse in parallel?

Barry Smith bsmith at mcs.anl.gov
Thu Jun 29 18:58:42 CDT 2017


> On Jun 29, 2017, at 3:38 PM, Lucas Clemente Vella <lvella at gmail.com> wrote:
> 
> Hi, I have a problem that is easily solvable with 8 processes (by easily I mean with few iterations). Using PCFIELDSPLIT, I get 2 outer iterations and 6 inner iterations, reaching a residual norm of 1e-8. The system has 786432 unknowns in total, and the solver setting is given by:
> 
>     PetscOptionsInsertString(NULL,
>         "-ksp_type fgmres "
>         "-pc_type fieldsplit "
>         "-pc_fieldsplit_detect_saddle_point "
>         "-pc_fieldsplit_type schur "
>         "-pc_fieldsplit_schur_fact_type full "
>         "-pc_fieldsplit_schur_precondition self "
>         "-fieldsplit_0_ksp_type bcgs "
>         "-fieldsplit_0_pc_type hypre "
>         "-fieldsplit_1_ksp_type gmres "
>         "-fieldsplit_1_pc_type lsc "
>         "-fieldsplit_1_lsc_pc_type hypre "
>         "-fieldsplit_1_lsc_pc_hypre_boomeramg_cycle_type w");
> 
> The problem is that it is slow (compared to less complex systems, solvable simply with bcgs+hypre), so to try to speed things up, I ran with 64 processes, which gives only 12288 unknowns per process. In this setting, the inner iteration reaches the maximum of 15 iterations I set, and the outer iteration could not lower the residual norm below 1e2 after 20 iterations.
> 
> Is this supposed to happen? Is increasing the number of parallel processes supposed to worsen the solver performance?

   For many solvers, yes: increasing the number of processes while leaving everything else fixed does increase the number of iterations. The reason is simple: the more parallelism, the more the iteration relies on "older information", since each process does not know the most recent values computed by the other processes. This is the Jacobi versus Gauss-Seidel situation; Gauss-Seidel uses newer information and so almost always converges faster than Jacobi.
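
   As a rough illustration (a sketch only; "./app" is a hypothetical executable name, but the options themselves are standard PETSc runtime flags), the same effect shows up with plain block Jacobi, where each process applies only its own block of the preconditioner:

       # More blocks (processes) means less coupling information per application
       # of the preconditioner, so -ksp_converged_reason typically reports more
       # iterations on the second run.
       mpiexec -n 8  ./app -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu \
                           -ksp_converged_reason -ksp_monitor_true_residual
       mpiexec -n 64 ./app -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu \
                           -ksp_converged_reason -ksp_monitor_true_residual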

   Now, appropriate algebraic multigrid does not suffer much from this problem, so if the number of iterations increases dramatically with AMG, it usually means that AMG is not appropriate for the problem or something is wrong.
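
   One way to see where the degradation happens (again just a sketch; the exact option prefixes come from how the splits were set up, and -ksp_view will confirm them) is to turn on per-split monitoring by appending to the options string you already have:

       PetscOptionsInsertString(NULL,
           "-ksp_view "                            /* print the full nested solver configuration */
           "-ksp_monitor_true_residual "           /* outer FGMRES residual history */
           "-fieldsplit_0_ksp_converged_reason "   /* iterations of the A00 (bcgs+hypre) solve */
           "-fieldsplit_1_ksp_converged_reason");  /* iterations of the Schur complement (gmres+lsc) solve */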


   Barry


> I just want to rule this issue out on the PETSc and Hypre side if possible, so that if I ever experience such behavior again, I can be sure my code is wrong...
> 
> -- 
> Lucas Clemente Vella
> lvella at gmail.com
