[petsc-users] How to specify different MPI communication patterns.
Randall Mackie
rlmackie862 at gmail.com
Tue May 21 10:55:38 CDT 2024
Dear PETSc team,
A few years ago we had some issues with MPI communication when using large numbers of processes and subcommunicators; see this thread:
https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2020-April/040976.html
We are once again encountering strange issues when running our code on a new cluster. After a month of various tests we have not found a solution, but we suspect it is related to network traffic and heavy MPI communication, similar perhaps to the issue in that earlier thread.
Is it still possible to change the communication pattern with the runtime option -build_twosided allreduce (assuming that is still the correct syntax)?
Are there other runtime options we can try in order to change the MPI communication type used for all underlying communication?
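For context, here is the kind of invocation we have been testing (a sketch only: ./our_app and the process count are placeholders, and the option values ibarrier, allreduce, and redscatter are our reading of the PetscCommBuildTwoSided manual page):

    mpiexec -n 1024 ./our_app -build_twosided allreduce

The same option can also be set through the PETSC_OPTIONS environment variable:

    export PETSC_OPTIONS="-build_twosided allreduce"
    mpiexec -n 1024 ./our_app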
Thank you,
Randy M.