[petsc-users] Strong scaling concerns for PCBDDC with Vector FEM
neil liu
liufield at gmail.com
Thu Aug 15 16:03:41 CDT 2024
Dear PETSc developers,
Thanks for your previous help. PCBDDC now converges to 1e-8 with:
petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 8 ./app -pc_type bddc
-pc_bddc_coarse_redundant_pc_type svd -ksp_error_if_not_converged
-mat_type is -ksp_monitor -ksp_rtol 1e-8 -ksp_gmres_restart 5000 -ksp_view
-pc_bddc_use_local_mat_graph 0 -pc_bddc_dirichlet_pc_type ilu
-pc_bddc_neumann_pc_type gamg -pc_bddc_neumann_pc_gamg_esteig_ksp_max_it 10
-ksp_converged_reason -pc_bddc_neumann_approximate -ksp_max_it 500 -log_view
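For context, the -mat_type is option keeps the operator in unassembled MATIS format, which PCBDDC requires. Below is only a minimal, hypothetical sketch (a toy 1D reaction-diffusion assembly, not the actual vector FEM application) of how such a MATIS operator and the BDDC solver can be set up in code; the -ksp_* and -pc_bddc_* options above are then picked up by KSPSetFromOptions().

#include <petscksp.h>

int main(int argc, char **argv)
{
  PetscMPIInt            rank, size;
  PetscInt               ne = 8, nl, Nglob, i, *l2g;
  PetscReal              h  = 0.1;
  ISLocalToGlobalMapping map;
  Mat                    A;
  Vec                    x, b;
  KSP                    ksp;
  PC                     pc;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));

  nl    = ne + 1;        /* local nodes; endpoints are shared with neighbors */
  Nglob = size * ne + 1; /* global nodes */

  /* local-to-global map: subdomain 'rank' owns global nodes rank*ne .. rank*ne+ne */
  PetscCall(PetscMalloc1(nl, &l2g));
  for (i = 0; i < nl; i++) l2g[i] = rank * ne + i;
  PetscCall(ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nl, l2g, PETSC_OWN_POINTER, &map));

  /* unassembled operator in MATIS format, as selected at run time by -mat_type is */
  PetscCall(MatCreateIS(PETSC_COMM_WORLD, 1, PETSC_DECIDE, PETSC_DECIDE, Nglob, Nglob, map, map, &A));
  PetscCall(MatISSetPreallocation(A, 3, NULL, 3, NULL));

  /* element loop: stiffness + mass, so the global operator is SPD without boundary conditions */
  for (i = 0; i < ne; i++) {
    PetscInt    idx[2] = {i, i + 1};
    PetscScalar Ke[4]  = {1.0 / h + 2.0 * h / 6.0, -1.0 / h + h / 6.0,
                          -1.0 / h + h / 6.0, 1.0 / h + 2.0 * h / 6.0};
    PetscCall(MatSetValuesLocal(A, 2, idx, 2, idx, Ke, ADD_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* KSP + PCBDDC; the -ksp_* and -pc_bddc_* options are applied by KSPSetFromOptions() */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCBDDC));
  PetscCall(KSPSetTolerances(ksp, 1e-8, PETSC_DEFAULT, PETSC_DEFAULT, 500));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(ISLocalToGlobalMappingDestroy(&map));
  PetscCall(PetscFinalize());
  return 0;
}

The same command-line options shown above would apply unchanged to this toy example.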
Then I ran a strong scaling test with two cases. Case 1 involves only real
numbers in the matrix and RHS (tetrahedra: 49,152; DOFs: 324,224). Case 2
involves complex numbers because of the PML (tetrahedra: 95,336; DOFs:
611,432).
Case 1:
CPU #    Time for 500 KSP steps (s)    Parallel efficiency    PCSetUp time (s)
  2      234.7                         --                      3.12
  4      126.6                         0.92                    1.62
  8       84.97                        0.69                    1.26
However, for Case 2:
CPU #    Time for 500 KSP steps (s)    Parallel efficiency    PCSetUp time (s)
  2      584.5                         --                      8.61
  4      376.8                         0.77                    6.56
  8      459.6                         0.31                   66.47
For both cases I checked the PCSetUp time as an example. It seems that the
8-CPU run of Case 2 spends far too much time in PCSetUp.
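The PCSetUp numbers above are the PCSetUp event times reported by -log_view. As a hypothetical sketch (continuing the toy example above, so ksp, b, and x are assumed to exist), the setup and solve phases could also be separated into their own -log_view stages like this:

/* register two stages so -log_view reports setup and solve separately */
PetscLogStage stage_setup, stage_solve;
PetscCall(PetscLogStageRegister("PC setup", &stage_setup));
PetscCall(PetscLogStageRegister("KSP solve", &stage_solve));

PetscCall(PetscLogStagePush(stage_setup));
PetscCall(KSPSetUp(ksp)); /* triggers PCSetUp for PCBDDC */
PetscCall(PetscLogStagePop());

PetscCall(PetscLogStagePush(stage_solve));
PetscCall(KSPSolve(ksp, b, x));
PetscCall(PetscLogStagePop());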
Do you have any ideas about what is going on here?
Thanks,
Xiaodong