<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Mar 13, 2021 at 8:50 AM Pierre Jolivet <<a href="mailto:pierre@joliv.et">pierre@joliv.et</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;"><div>-pc_hypre_boomeramg_relax_type_all Jacobi => </div><div>  Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 3</div></div></blockquote><div><br></div><div>FYI, You need to use Chebyshev "KSP" with jacobi.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;"><div><div>-pc_hypre_boomeramg_relax_type_all l1scaled-Jacobi => </div></div><div>OK, independently of the architecture it seems (Eric Docker image with 1 or 2 threads or my macOS), but contraction factor is higher</div><div>  Linear solve converged due to CONVERGED_RTOL iterations 8</div><div>  Linear solve converged due to CONVERGED_RTOL iterations 24</div><div>  Linear solve converged due to CONVERGED_RTOL iterations 26</div><div>v. currently</div><div>  Linear solve converged due to CONVERGED_RTOL iterations 7</div><div>  Linear solve converged due to CONVERGED_RTOL iterations 9</div><div>  Linear solve converged due to CONVERGED_RTOL iterations 10</div><div><br></div><div>Do we change this? Or should we force OMP_NUM_THREADS=1 for make test?</div></div></blockquote><div><br></div><div>The default smoother is pretty good so I'd keep it in the test.</div><div>Mark</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;"><div><br></div><div>Thanks,</div><div>Pierre</div><div><br></div><div><blockquote type="cite"><div>On 13 Mar 2021, at 2:26 PM, Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:</div><br><div><div dir="ltr">Hypre uses a multiplicative smoother by default. It has a chebyshev smoother. That with a Jacobi PC should be thread invariant.<div><div>Mark</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Mar 13, 2021 at 8:18 AM Pierre Jolivet <<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><br><div><blockquote type="cite"><div>On 13 Mar 2021, at 9:17 AM, Pierre Jolivet <<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a>> wrote:</div><br><div><div>Hello Eric,<div>I’ve made an “interesting” discovery, so I’ll put back the list in c/c.</div><div>It appears the following snippet of code which uses Allreduce() + lambda function + MPI_IN_PLACE is:</div><div>- Valgrind-clean with MPICH;</div><div>- Valgrind-clean with OpenMPI 4.0.5;</div><div>- not Valgrind-clean with OpenMPI 4.1.0.</div><div>I’m not sure who is to blame here, I’ll need to look at the MPI specification for what is required by the implementors and users in that case.</div><div><br></div><div>In the meantime, I’ll do the following:</div><div>- update config/BuildSystem/config/packages/OpenMPI.py to use OpenMPI 4.1.0, see if any other error appears;</div><div>- provide a hotfix to bypass the segfaults;</div></div></div></blockquote><div><br></div><div>I can confirm that splitting the single Allreduce with my own MPI_Op into two Allreduce with MAX and BAND fixes the segfaults with OpenMPI (*).</div><br><blockquote type="cite"><div><div><div>- look at the hypre issue and whether they should be deferred to the hypre team.</div></div></div></blockquote><div><br></div><div>I don’t know if there is something wrong in hypre threading or if it’s just a side effect of threading, but it seems that the number of threads has a drastic effect on the quality of the PC.</div><div>By default, it looks that there are two threads per process with your Docker image.</div><div>If I force OMP_NUM_THREADS=1, then I get the same convergence as in the output file.</div><div><br></div><div>Thanks,</div><div>Pierre</div><div><br></div><div>(*) <a href="https://gitlab.com/petsc/petsc/-/merge_requests/3712" target="_blank">https://gitlab.com/petsc/petsc/-/merge_requests/3712</a></div><br><blockquote type="cite"><div><div><div>Thank you for the Docker files, they were really useful.</div><div>If you want to avoid oversubscription failures, you can edit the file /opt/openmpi-4.1.0/etc/openmpi-default-hostfile and append the line:</div><div>localhost slots=12</div><div>If you want to increase the timeout limit of PETSc test suite for each test, you can add the extra flag in your command line TIMEOUT=180 (default is 60, units are seconds).</div><div><br></div><div>Thanks, I’ll ping you on GitLab when I’ve got something ready for you to try,</div><div>Pierre<br><div><br></div><div></div></div></div><span id="gmail-m_-4321402420262701933gmail-m_3567963440499379521cid:15B6BE6E-0C96-4CBA-9ADC-EFB1DE1BDFC3"><ompi.cxx></span><div><div><div></div><div><br><blockquote type="cite"><div>On 12 Mar 2021, at 8:54 PM, Eric Chamberland <<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a>> wrote:</div><br><div>
  
    
  
  <div><p>Hi Pierre,</p><p>I now have a docker container reproducing the problems here.</p><p>Actually, if I look at snes_tutorials-ex12_quad_singular_hpddm 
      it fails like this:</p><p>not ok snes_tutorials-ex12_quad_singular_hpddm # Error code: 59<br>
      #       Initial guess<br>
      #       L_2 Error: 0.00803099<br>
      #       Initial Residual<br>
      #       L_2 Residual: 1.09057<br>
      #       Au - b = Au + F(0)<br>
      #       Linear L_2 Residual: 1.09057<br>
      #       [d470c54ce086:14127] Read -1, expected 4096, errno = 1<br>
      #       [d470c54ce086:14128] Read -1, expected 4096, errno = 1<br>
      #       [d470c54ce086:14129] Read -1, expected 4096, errno = 1<br>
      #       [3]PETSC ERROR:
      ------------------------------------------------------------------------<br>
      #       [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation
      Violation, probably memory access out of range<br>
      #       [3]PETSC ERROR: Try option -start_in_debugger or
      -on_error_attach_debugger<br>
      #       [3]PETSC ERROR: or see
      <a href="https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind" target="_blank">https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a><br>
      #       [3]PETSC ERROR: or try <a href="http://valgrind.org/" target="_blank">http://valgrind.org</a> on GNU/linux
      and Apple Mac OS X to find memory corruption errors<br>
      #       [3]PETSC ERROR: likely location of problem given in stack
      below<br>
      #       [3]PETSC ERROR: ---------------------  Stack Frames
      ------------------------------------<br>
      #       [3]PETSC ERROR: Note: The EXACT line numbers in the stack
      are not available,<br>
      #       [3]PETSC ERROR:       INSTEAD the line number of the start
      of the function<br>
      #       [3]PETSC ERROR:       is given.<br>
      #       [3]PETSC ERROR: [3] buildTwo line 987
      /opt/petsc-main/include/HPDDM_schwarz.hpp<br>
      #       [3]PETSC ERROR: [3] next line 1130
      /opt/petsc-main/include/HPDDM_schwarz.hpp<br>
      #       [3]PETSC ERROR: --------------------- Error Message
      --------------------------------------------------------------<br>
      #       [3]PETSC ERROR: Signal received<br>
      #       [3]PETSC ERROR: [0]PETSC ERROR:
      ------------------------------------------------------------------------</p><p>also ex12_quad_hpddm_reuse_baij fails with a lot more "Read -1,
      expected ..." which I don't know where they come from...?</p><p>Hypre (like in diff-snes_tutorials-ex56_hypre)  is also having
      DIVERGED_INDEFINITE_PC failures...</p><p>Please see the 3 attached docker files:</p><p>1) fedora_mkl_and_devtools : the DockerFile which install fedore
      33 with gnu compilers and MKL and everything to develop.</p><p>2) openmpi: the DockerFile to bluid OpenMPI</p><p>3) petsc: The las DockerFile that build/install and test PETSc</p><p>I build the 3 like this:</p><p>docker build -t fedora_mkl_and_devtools -f
      fedora_mkl_and_devtools .</p><p>docker build -t openmpi -f openmpi .</p><p>docker build -t petsc -f petsc .</p><p>Disclaimer: I am not a docker expert, so I may do things that are
      not docker-stat-of-the-art but I am opened to suggestions... ;)<br>
    </p><p>I have just ran it on my portable (long) which have not enough
      cores, so many more tests failed (should force --oversubscribe but
      don't know how to).  I will relaunch on my workstation in a few
      minutes.</p><p>I will now test your branch! (sorry for the delay).</p><p>Thanks,</p><p>Eric<br>
    </p>
    <div>On 2021-03-11 9:03 a.m., Eric
      Chamberland wrote:<br>
    </div>
    <blockquote type="cite"><p>Hi Pierre,</p><p>ok, that's interesting!</p><p>I will try to build a docker image until tomorrow and give you
        the exact recipe to reproduce the bugs.</p><p>Eric</p><p><br>
      </p>
      <div>On 2021-03-11 2:46 a.m., Pierre
        Jolivet wrote:<br>
      </div>
      <blockquote type="cite">
        
        <br>
        <div><br>
          <blockquote type="cite">
            <div>On 11 Mar 2021, at 6:16 AM, Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote:</div>
            <br>
            <div>
              
              <div>
                <div><br>
                </div>
                  Eric,
                <div><br>
                </div>
                <div>   Sorry about not being more immediate.
                  We still have this in our active email so you don't
                  need to submit individual issues. We'll try to get to
                  them as soon as we can.</div>
              </div>
            </div>
          </blockquote>
          <div><br>
          </div>
          <div>Indeed, I’m still trying to figure this out.</div>
          <div>I realized that some of my configure flags were different
            than yours, e.g., no --with-memalign.</div>
          <div>I’ve also added SuperLU_DIST to my installation.</div>
          <div>Still, I can’t reproduce any issue.</div>
          <div>I will continue looking into this, it appears I’m seeing
            some valgrind errors, but I don’t know if this is some side
            effect of OpenMPI not being valgrind-clean (last time I
            checked, there was no error with MPICH).</div>
          <div><br>
          </div>
          <div>Thank you for your patience,</div>
          <div>Pierre</div>
          <div><br>
          </div>
          <div>
            <div>/usr/bin/gmake -f gmakefile test test-fail=1</div>
            <div>Using MAKEFLAGS: test-fail=1</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex12_quad_hpddm_reuse_baij.counts</div>
            <div> ok snes_tutorials-ex12_quad_hpddm_reuse_baij</div>
            <div> ok diff-snes_tutorials-ex12_quad_hpddm_reuse_baij</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tests-ex33_superlu_dist_2.counts</div>
            <div> ok ksp_ksp_tests-ex33_superlu_dist_2</div>
            <div> ok diff-ksp_ksp_tests-ex33_superlu_dist_2</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tests-ex49_superlu_dist.counts</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-0</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-0</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-1</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-1</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-0</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-0</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-1</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-1</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-0</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-0</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-1</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-1</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-0</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-0</div>
            <div> ok
              ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-1</div>
            <div> ok
              diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-1</div>
            <div>        TEST
              arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex50_tut_2.counts</div>
            <div> ok ksp_ksp_tutorials-ex50_tut_2</div>
            <div> ok diff-ksp_ksp_tutorials-ex50_tut_2</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tests-ex33_superlu_dist.counts</div>
            <div> ok ksp_ksp_tests-ex33_superlu_dist</div>
            <div> ok diff-ksp_ksp_tests-ex33_superlu_dist</div>
            <div>        TEST
              arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex56_hypre.counts</div>
            <div> ok snes_tutorials-ex56_hypre</div>
            <div> ok diff-snes_tutorials-ex56_hypre</div>
            <div>        TEST
              arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex56_2.counts</div>
            <div> ok ksp_ksp_tutorials-ex56_2</div>
            <div> ok diff-ksp_ksp_tutorials-ex56_2</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex17_3d_q3_trig_elas.counts</div>
            <div> ok snes_tutorials-ex17_3d_q3_trig_elas</div>
            <div> ok diff-snes_tutorials-ex17_3d_q3_trig_elas</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij.counts</div>
            <div> ok snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij</div>
            <div> ok
              diff-snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5_superlu_dist_3.counts</div>
            <div>not ok ksp_ksp_tutorials-ex5_superlu_dist_3 # Error
              code: 1</div>
            <div>#<span style="white-space:pre-wrap">     </span>srun:
              error: Unable to create step for job 1426755: More
              processors requested than permitted</div>
            <div> ok ksp_ksp_tutorials-ex5_superlu_dist_3 # SKIP Command
              failed so no diff</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5f_superlu_dist.counts</div>
            <div> ok ksp_ksp_tutorials-ex5f_superlu_dist # SKIP Fortran
              required for this test</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex12_tri_parmetis_hpddm_baij.counts</div>
            <div> ok snes_tutorials-ex12_tri_parmetis_hpddm_baij</div>
            <div> ok diff-snes_tutorials-ex12_tri_parmetis_hpddm_baij</div>
            <div>        TEST
              arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex19_tut_3.counts</div>
            <div> ok snes_tutorials-ex19_tut_3</div>
            <div> ok diff-snes_tutorials-ex19_tut_3</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex17_3d_q3_trig_vlap.counts</div>
            <div> ok snes_tutorials-ex17_3d_q3_trig_vlap</div>
            <div> ok diff-snes_tutorials-ex17_3d_q3_trig_vlap</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5f_superlu_dist_3.counts</div>
            <div> ok ksp_ksp_tutorials-ex5f_superlu_dist_3 # SKIP
              Fortran required for this test</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex19_superlu_dist.counts</div>
            <div> ok snes_tutorials-ex19_superlu_dist</div>
            <div> ok diff-snes_tutorials-ex19_superlu_dist</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex56_attach_mat_nearnullspace-1_bddc_approx_hypre.counts</div>
            <div> ok
              snes_tutorials-ex56_attach_mat_nearnullspace-1_bddc_approx_hypre</div>
            <div> ok
              diff-snes_tutorials-ex56_attach_mat_nearnullspace-1_bddc_approx_hypre</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex49_hypre_nullspace.counts</div>
            <div> ok ksp_ksp_tutorials-ex49_hypre_nullspace</div>
            <div> ok diff-ksp_ksp_tutorials-ex49_hypre_nullspace</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex19_superlu_dist_2.counts</div>
            <div> ok snes_tutorials-ex19_superlu_dist_2</div>
            <div> ok diff-snes_tutorials-ex19_superlu_dist_2</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5_superlu_dist_2.counts</div>
            <div>not ok ksp_ksp_tutorials-ex5_superlu_dist_2 # Error
              code: 1</div>
            <div>#<span style="white-space:pre-wrap">     </span>srun:
              error: Unable to create step for job 1426755: More
              processors requested than permitted</div>
            <div> ok ksp_ksp_tutorials-ex5_superlu_dist_2 # SKIP Command
              failed so no diff</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex56_attach_mat_nearnullspace-0_bddc_approx_hypre.counts</div>
            <div> ok
              snes_tutorials-ex56_attach_mat_nearnullspace-0_bddc_approx_hypre</div>
            <div> ok
              diff-snes_tutorials-ex56_attach_mat_nearnullspace-0_bddc_approx_hypre</div>
            <div>        TEST
              arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex64_1.counts</div>
            <div> ok ksp_ksp_tutorials-ex64_1</div>
            <div> ok diff-ksp_ksp_tutorials-ex64_1</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5_superlu_dist.counts</div>
            <div>not ok ksp_ksp_tutorials-ex5_superlu_dist # Error code:
              1</div>
            <div>#<span style="white-space:pre-wrap">     </span>srun:
              error: Unable to create step for job 1426755: More
              processors requested than permitted</div>
            <div> ok ksp_ksp_tutorials-ex5_superlu_dist # SKIP Command
              failed so no diff</div>
            <div>        TEST
arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5f_superlu_dist_2.counts</div>
            <div> ok ksp_ksp_tutorials-ex5f_superlu_dist_2 # SKIP
              Fortran required for this test</div>
          </div>
          <br>
          <blockquote type="cite">
            <div>
              <div>
                <div>   Barry</div>
                <div><br>
                  <div><br>
                    <blockquote type="cite">
                      <div>On Mar 10, 2021, at 11:03 PM, Eric
                        Chamberland <<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a>>
                        wrote:</div>
                      <br>
                      <div>
                        
                        <div><p>Barry,</p><p>to get a some follow up on
                            --with-openmp=1 failures, shall I open
                            gitlab issues for:</p><p>a) all hypre failures giving <span style="white-space:pre-wrap">DIVERGED_INDEFINITE_PC</span></p><p><span style="white-space:pre-wrap">b) all superlu_dist failures giving different results with </span><span style="white-space:pre-wrap"><span style="white-space:pre-wrap">initia and "Exceeded timeout limit of 60 s"</span></span></p><p><span style="white-space:pre-wrap"><span style="white-space:pre-wrap">c) hpddm failures "free(): invalid next size (fast)" and "Segmentation Violation"
</span></span></p><p><span style="white-space:pre-wrap"><span style="white-space:pre-wrap">d) all tao's </span></span><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap">"Exceeded timeout limit of 60 s"</span></span></span></span></p><p><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap">I don't see how I could do all these debugging by myself...</span></span></span></span></p><p><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap">Thanks,</span></span></span></span></p><p><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap"><span style="white-space:pre-wrap">Eric
</span></span></span></span></p>
                          <br>
                        </div>
                      </div>
                    </blockquote>
                  </div>
                  <br>
                </div>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </blockquote>
      <pre cols="72">-- 
Eric Chamberland, ing., M. Ing
Professionnel de recherche
GIREF/Université Laval
(418) 656-2131 poste 41 22 42</pre>
    </blockquote>
    <pre cols="72">-- 
Eric Chamberland, ing., M. Ing
Professionnel de recherche
GIREF/Université Laval
(418) 656-2131 poste 41 22 42</pre>
  </div>

<span id="gmail-m_-4321402420262701933gmail-m_3567963440499379521cid:26EB476E-4C18-4B68-9DC9-6FBE92E94935"><fedora_mkl_and_devtools.txt></span><span id="gmail-m_-4321402420262701933gmail-m_3567963440499379521cid:EC379F4B-01BD-409E-8BBC-6FBA5A49236E"><openmpi.txt></span><span id="gmail-m_-4321402420262701933gmail-m_3567963440499379521cid:3CA91B7D-219A-4965-9FF3-D836488847A0"><petsc.txt></span></div></blockquote></div><br></div></div></div></blockquote></div><br></div></blockquote></div>
</div></blockquote></div><br></div></blockquote></div></div>