[petsc-users] Is linear solver performance worse in parallel?

Lucas Clemente Vella lvella at gmail.com
Thu Jun 29 15:38:19 CDT 2017


Hi, I have a problem that is easily solvable with 8 processes (by "easily" I
mean in few iterations). Using PCFIELDSPLIT, I get 2 outer iterations and
6 inner iterations, reaching a residual norm of 1e-8. The system has 786432
unknowns in total, and the solver settings are given by:

    /* FGMRES outer solve preconditioned by a Schur-complement
       fieldsplit: split 0 is BiCGStab with a hypre (BoomerAMG)
       preconditioner, split 1 is GMRES with LSC, itself
       preconditioned by BoomerAMG using W-cycles. */
    PetscOptionsInsertString(NULL,
        "-ksp_type fgmres "
        "-pc_type fieldsplit "
        "-pc_fieldsplit_detect_saddle_point "
        "-pc_fieldsplit_type schur "
        "-pc_fieldsplit_schur_fact_type full "
        "-pc_fieldsplit_schur_precondition self "
        "-fieldsplit_0_ksp_type bcgs "
        "-fieldsplit_0_pc_type hypre "
        "-fieldsplit_1_ksp_type gmres "
        "-fieldsplit_1_pc_type lsc "
        "-fieldsplit_1_lsc_pc_type hypre "
        "-fieldsplit_1_lsc_pc_hypre_boomeramg_cycle_type w");

The problem is that it is slow (compared to less complex systems, solvable
simply with bcgs+hypre), so to try to speed things up, I ran with 64
processes, which leaves only 12288 unknowns per process. In this setting,
the inner iteration hits the maximum of 15 iterations I set, and the outer
iteration could not reduce the residual norm below 1e2 after 20 iterations.
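One thing I am aware of: BoomerAMG's parallel coarsening depends on the
partitioning, so the preconditioner built on 64 processes is not the same
operator as on 8. A sketch of an experiment I could try, reusing the same
option prefix as in my setup above (the particular coarsening scheme and
threshold here are just illustrative values, not something I have tested):

    /* Hypothetical experiment (untested): pick a different parallel
       coarsening scheme and strong threshold for the BoomerAMG inside
       the LSC split, since the default coarsening changes with the
       partitioning and hence with the process count. */
    PetscOptionsInsertString(NULL,
        "-fieldsplit_1_lsc_pc_hypre_boomeramg_coarsen_type HMIS "
        "-fieldsplit_1_lsc_pc_hypre_boomeramg_strong_threshold 0.5");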

Is this supposed to happen? Can increasing the number of parallel processes
worsen solver convergence like this? I just want to rule out an issue on the
PETSc and hypre side if possible, so that if I ever experience such behavior
again, I can be sure my code is wrong...

-- 
Lucas Clemente Vella
lvella at gmail.com

