[petsc-users] PETSc params

Mark Adams mfadams at lbl.gov
Thu Oct 8 08:56:28 CDT 2015

Pat, if you can fold this into some large scale experiments, that would be great:

-s2_pc_gamg_repartition true [false]
-s2_pc_gamg_mat_partitioning_type parmetis
-s2_pc_gamg_process_eq_limit 200 [10, 500]

duplicate this without s2_, for the other solver.
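As a rough sketch of what a full command line might look like with the options duplicated for both solvers (here "./app" and the mpiexec launch line are hypothetical placeholders, not from the original message):

```shell
#!/bin/sh
# Sketch only: "./app" is a hypothetical PETSc application binary.
# The same GAMG options are passed twice: once with the "s2_" prefix
# for the second solver, and once unprefixed for the other solver.
OPTS="-s2_pc_gamg_repartition true \
 -s2_pc_gamg_mat_partitioning_type parmetis \
 -s2_pc_gamg_process_eq_limit 200 \
 -pc_gamg_repartition true \
 -pc_gamg_mat_partitioning_type parmetis \
 -pc_gamg_process_eq_limit 200"
echo "mpiexec -n 1024 ./app $OPTS"
```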

The first parameter controls whether to repartition the coarse grids or not.
The second parameter is not used if you are not repartitioning.
The third parameter governs the size of processor subdomains on coarse
grids, so a larger value reduces the number of active processes faster.
Any kind of scan that you can do would be great. I am suggesting doing 10 &
500, as well as 200, which is a sort of default.

You don't have to do all of these permutations. Maybe just 4 tests
(repartition / process_eq_limit): true/200, false/200, true/10, true/500.
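The four suggested runs could be scripted along these lines (a sketch only: "./app", the mpiexec line, and the log file names are hypothetical placeholders; -log_summary is the PETSc option that produces the log summary file requested below):

```shell
#!/bin/sh
# Loop over the four suggested (repartition, process_eq_limit) pairs:
# true/200, false/200, true/10, true/500. Each iteration prints the
# command that would be launched; CMDS accumulates them for inspection.
CMDS=""
for combo in "true 200" "false 200" "true 10" "true 500"; do
  set -- $combo
  repart=$1
  eqlim=$2
  cmd="mpiexec -n 1024 ./app \
 -s2_pc_gamg_repartition $repart \
 -s2_pc_gamg_mat_partitioning_type parmetis \
 -s2_pc_gamg_process_eq_limit $eqlim \
 -log_summary log_${repart}_${eqlim}.txt"
  echo "$cmd"
  CMDS="$CMDS$cmd
"
done
```

Each run writes its own log summary file, so the four results stay separate for comparison.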

Also use -options_left, if you don't already have it, to test that these
parameters are spelled correctly.

I just need the log summary file.  The -ksp_view data might be useful
eventually, but you can start by just giving me the log files.

And remember to write up what you said about slow MPI collectives in
cray-petsc with multiple ranks per node on Titan ... and any data that you
might have. You can send it to petsc-users at mcs.anl.gov.
