Hello Eric,

I've made an "interesting" discovery, so I'll put the mailing list back in CC.
It appears that the following snippet of code, which uses Allreduce() + a lambda function + MPI_IN_PLACE, is:
- Valgrind-clean with MPICH;
- Valgrind-clean with OpenMPI 4.0.5;
- not Valgrind-clean with OpenMPI 4.1.0.
I'm not sure who is to blame here; I'll need to check the MPI specification for what is required of implementors and users in that case.

In the meantime, I'll do the following:
- update config/BuildSystem/config/packages/OpenMPI.py to use OpenMPI 4.1.0 and see if any other error appears;
- provide a hotfix to bypass the segfaults;
- look at the hypre issue and whether it should be deferred to the hypre team.

Thank you for the Docker files; they were really useful.
If you want to avoid oversubscription failures, you can edit the file /opt/openmpi-4.1.0/etc/openmpi-default-hostfile and append the line:
localhost slots=12
If you want to increase the per-test timeout of the PETSc test suite, you can add TIMEOUT=180 to your command line (the default is 60 seconds).

Thanks, I'll ping you on GitLab when I've got something ready for you to try,
Pierre
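
P.S. In case it helps you reproduce on your side, here is a minimal standalone sketch of the kind of call I mean. This is only my own illustration of the Allreduce() + lambda + MPI_IN_PLACE pattern, not the exact snippet from our code; the reduction (an element-wise sum) and the data are made up.

#include <mpi.h>
#include <vector>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  // User-defined reduction built from a non-capturing lambda, which
  // converts implicitly to an MPI_User_function pointer.
  MPI_Op op;
  MPI_Op_create(
    [](void *in, void *inout, int *len, MPI_Datatype *) {
      const int *a = static_cast<const int *>(in);
      int       *b = static_cast<int *>(inout);
      for (int i = 0; i < *len; ++i) b[i] += a[i]; // element-wise sum
    },
    1 /* commutative */, &op);

  // In-place reduction: the send buffer is MPI_IN_PLACE, so the input
  // is taken from (and the result written to) the receive buffer.
  std::vector<int> buf(4, 1);
  MPI_Allreduce(MPI_IN_PLACE, buf.data(), static_cast<int>(buf.size()), MPI_INT, op, MPI_COMM_WORLD);

  MPI_Op_free(&op);
  MPI_Finalize();
  return 0;
}

Something like "mpicxx -std=c++11 reproducer.cxx" (the file name is just an example) and then a couple of ranks under Valgrind is enough to compare the different MPI builds.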