<div dir="ltr"><div dir="ltr">On Thu, Mar 18, 2021 at 11:51 PM Jed Brown <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Note that this is specific to the node numbering, and that node numbering tends to produce poor results even for MatMult due to poor cache reuse of the vector. It's good practice after partitioning to use a locality-preserving ordering of dofs on a process (e.g., RCM if you use MatOrdering). This was shown in the PETSc-FUN3D papers circa 1999 and has been confirmed multiple times over the years by various members of this list (including me). I believe FEniCS and libMesh now do this by default (or at least have an option) and it was shown to perform better. It's a notable weakness of DMPlex that it does not apply such an ordering of dofs and I've complained to Matt about it many times over the years, but any blame rests solely with me for not carving out time to implement it here.<br></blockquote><div><br></div><div>Jesus. Of course Plex can do this. It is the default for PyLith. Less complaining, more looking.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Better SGS/SOR smoothing factors with simple OpenMP partitioning are an additional bonus, though I'm not a fan of using OpenMP in this way.<br>
<br>
Eric Chamberland <<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a>> writes:<br>
<br>
> Hi,<br>
><br>
> For the knowledge of readers, I just read section 7.3 here:<br>
><br>
> <a href="https://www.researchgate.net/publication/220411740_Multigrid_Smoothers_for_Ultraparallel_Computing" rel="noreferrer" target="_blank">https://www.researchgate.net/publication/220411740_Multigrid_Smoothers_for_Ultraparallel_Computing</a><br>
><br>
> And it explains why multi-threading gives poor results with the <br>
> hybrid SGS smoother...<br>
><br>
> Eric<br>
><br>
><br>
> On 2021-03-15 2:50 p.m., Barry Smith wrote:<br>
>><br>
>> I posted some information at the issue.<br>
>><br>
>> IMHO it is likely a bug in one or more of hypre's smoothers that <br>
>> use OpenMP. We have never tested them before (and likely hypre has not <br>
>> tested all the combinations) and so would not have seen the bug. <br>
>> Hopefully they can just fix it.<br>
>><br>
>> Barry<br>
>><br>
>> I got the problem to occur with ex56 with 2 MPI ranks and 4 OpenMP <br>
>> threads; with fewer than 4 threads it did not generate an <br>
>> indefinite preconditioner.<br>
>><br>
>><br>
>>> On Mar 14, 2021, at 1:18 PM, Eric Chamberland <br>
>>> <<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a> <br>
>>> <mailto:<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a>>> wrote:<br>
>>><br>
>>> Done:<br>
>>><br>
>>> <a href="https://github.com/hypre-space/hypre/issues/303" rel="noreferrer" target="_blank">https://github.com/hypre-space/hypre/issues/303</a><br>
>>><br>
>>> Maybe I will need some help with PETSc to answer their questions...<br>
>>><br>
>>> Eric<br>
>>><br>
>>> On 2021-03-14 3:44 a.m., Stefano Zampini wrote:<br>
>>>> Eric<br>
>>>><br>
>>>> You should report these HYPRE issues upstream <br>
>>>> <a href="https://github.com/hypre-space/hypre/issues" rel="noreferrer" target="_blank">https://github.com/hypre-space/hypre/issues</a> <br>
>>>> <<a href="https://github.com/hypre-space/hypre/issues" rel="noreferrer" target="_blank">https://github.com/hypre-space/hypre/issues</a>><br>
>>>><br>
>>>><br>
>>>>> On Mar 14, 2021, at 3:44 AM, Eric Chamberland <br>
>>>>> <<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a> <br>
>>>>> <mailto:<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a>>> wrote:<br>
>>>>><br>
>>>>> For us it clearly creates problems in real computations...<br>
>>>>><br>
>>>>> I understand the need to have clean tests for PETSc, but to me, it <br>
>>>>> reveals that hypre isn't usable with more than one thread for now...<br>
>>>>><br>
>>>>> Another solution: force a single-threaded configuration for hypre <br>
>>>>> until this is fixed?<br>
>>>>><br>
>>>>> Eric<br>
>>>>><br>
>>>>> On 2021-03-13 8:50 a.m., Pierre Jolivet wrote:<br>
>>>>>> -pc_hypre_boomeramg_relax_type_all Jacobi =><br>
>>>>>> Linear solve did not converge due to DIVERGED_INDEFINITE_PC <br>
>>>>>> iterations 3<br>
>>>>>> -pc_hypre_boomeramg_relax_type_all l1scaled-Jacobi =><br>
>>>>>> OK, independently of the architecture it seems (Eric's Docker image <br>
>>>>>> with 1 or 2 threads, or my macOS), but the contraction factor is higher<br>
>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 8<br>
>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 24<br>
>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 26<br>
>>>>>> v. currently<br>
>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 7<br>
>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 9<br>
>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 10<br>
>>>>>><br>
>>>>>> Do we change this? Or should we force OMP_NUM_THREADS=1 for make test?<br>
>>>>>><br>
>>>>>> Thanks,<br>
>>>>>> Pierre<br>
>>>>>><br>
>>>>>>> On 13 Mar 2021, at 2:26 PM, Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a> <br>
>>>>>>> <mailto:<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>>> wrote:<br>
>>>>>>><br>
>>>>>>> Hypre uses a multiplicative smoother by default. It also has a <br>
>>>>>>> Chebyshev smoother; that, with a Jacobi PC, should be thread <br>
>>>>>>> invariant.<br>
>>>>>>> Mark<br>
>>>>>>><br>
>>>>>>> On Sat, Mar 13, 2021 at 8:18 AM Pierre Jolivet <<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a> <br>
>>>>>>> <mailto:<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a>>> wrote:<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>>> On 13 Mar 2021, at 9:17 AM, Pierre Jolivet <<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a><br>
>>>>>>>> <mailto:<a href="mailto:pierre@joliv.et" target="_blank">pierre@joliv.et</a>>> wrote:<br>
>>>>>>>><br>
>>>>>>>> Hello Eric,<br>
>>>>>>>> I’ve made an “interesting” discovery, so I’ll put the list<br>
>>>>>>>> back in CC.<br>
>>>>>>>> It appears that the following snippet of code, which uses<br>
>>>>>>>> Allreduce() + a lambda function + MPI_IN_PLACE, is:<br>
>>>>>>>> - Valgrind-clean with MPICH;<br>
>>>>>>>> - Valgrind-clean with OpenMPI 4.0.5;<br>
>>>>>>>> - not Valgrind-clean with OpenMPI 4.1.0.<br>
>>>>>>>> I’m not sure who is to blame here; I’ll need to look at the<br>
>>>>>>>> MPI specification for what is required of the implementors<br>
>>>>>>>> and users in that case.<br>
>>>>>>>><br>
>>>>>>>> In the meantime, I’ll do the following:<br>
>>>>>>>> - update config/BuildSystem/config/packages/OpenMPI.py to<br>
>>>>>>>> use OpenMPI 4.1.0, see if any other error appears;<br>
>>>>>>>> - provide a hotfix to bypass the segfaults;<br>
>>>>>>><br>
>>>>>>> I can confirm that splitting the single Allreduce with my own<br>
>>>>>>> MPI_Op into two Allreduce calls with MAX and BAND fixes the<br>
>>>>>>> segfaults with OpenMPI (*).<br>
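>>>>>>><br>
>>>>>>> For the archives, a minimal sketch of that kind of split in<br>
>>>>>>> plain MPI (made-up names, not the actual HPDDM/PETSc code)<br>
>>>>>>> could look like:<br>
>>>>>>><br>
>>>>>>> #include <mpi.h><br>
>>>>>>><br>
>>>>>>> /* Replace one Allreduce that used a custom MPI_Op (max on some<br>
>>>>>>>    entries, bitwise AND on another) with two Allreduces using the<br>
>>>>>>>    predefined MPI_MAX and MPI_BAND operations. */<br>
>>>>>>> static void split_reduction(MPI_Comm comm, int maxvals[2], int *flag)<br>
>>>>>>> {<br>
>>>>>>>   MPI_Allreduce(MPI_IN_PLACE, maxvals, 2, MPI_INT, MPI_MAX, comm);<br>
>>>>>>>   MPI_Allreduce(MPI_IN_PLACE, flag, 1, MPI_INT, MPI_BAND, comm);<br>
>>>>>>> }<br>
>>>>>>><br>
>>>>>>> Two predefined reductions avoid the user-defined MPI_Op +<br>
>>>>>>> MPI_IN_PLACE path that seems to trigger the OpenMPI 4.1.0 issue,<br>
>>>>>>> at the cost of one extra collective call.<br>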
>>>>>>><br>
>>>>>>>> - look at the hypre issue and whether they should be<br>
>>>>>>>> deferred to the hypre team.<br>
>>>>>>><br>
>>>>>>> I don’t know if there is something wrong in hypre threading<br>
>>>>>>> or if it’s just a side effect of threading, but it seems that<br>
>>>>>>> the number of threads has a drastic effect on the quality of<br>
>>>>>>> the PC.<br>
>>>>>>> By default, it looks like there are two threads per process<br>
>>>>>>> with your Docker image.<br>
>>>>>>> If I force OMP_NUM_THREADS=1, then I get the same convergence<br>
>>>>>>> as in the output file.<br>
>>>>>>><br>
>>>>>>> Thanks,<br>
>>>>>>> Pierre<br>
>>>>>>><br>
>>>>>>> (*) <a href="https://gitlab.com/petsc/petsc/-/merge_requests/3712" rel="noreferrer" target="_blank">https://gitlab.com/petsc/petsc/-/merge_requests/3712</a><br>
>>>>>>> <<a href="https://gitlab.com/petsc/petsc/-/merge_requests/3712" rel="noreferrer" target="_blank">https://gitlab.com/petsc/petsc/-/merge_requests/3712</a>><br>
>>>>>>><br>
>>>>>>>> Thank you for the Docker files; they were really useful.<br>
>>>>>>>> If you want to avoid oversubscription failures, you can edit<br>
>>>>>>>> the file /opt/openmpi-4.1.0/etc/openmpi-default-hostfile and<br>
>>>>>>>> append the line:<br>
>>>>>>>> localhost slots=12<br>
>>>>>>>> If you want to increase the timeout limit of the PETSc test<br>
>>>>>>>> suite for each test, you can add the extra flag TIMEOUT=180 to<br>
>>>>>>>> your command line (default is 60, units are seconds).<br>
>>>>>>>><br>
>>>>>>>> Thanks, I’ll ping you on GitLab when I’ve got something<br>
>>>>>>>> ready for you to try,<br>
>>>>>>>> Pierre<br>
>>>>>>>><br>
>>>>>>>> <ompi.cxx><br>
>>>>>>>><br>
>>>>>>>>> On 12 Mar 2021, at 8:54 PM, Eric Chamberland<br>
>>>>>>>>> <<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a><br>
>>>>>>>>> <mailto:<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a>>> wrote:<br>
>>>>>>>>><br>
>>>>>>>>> Hi Pierre,<br>
>>>>>>>>><br>
>>>>>>>>> I now have a Docker container reproducing the problems here.<br>
>>>>>>>>><br>
>>>>>>>>> Actually, if I look at<br>
>>>>>>>>> snes_tutorials-ex12_quad_singular_hpddm it fails like this:<br>
>>>>>>>>><br>
>>>>>>>>> not ok snes_tutorials-ex12_quad_singular_hpddm # Error code: 59<br>
>>>>>>>>> # Initial guess<br>
>>>>>>>>> # L_2 Error: 0.00803099<br>
>>>>>>>>> # Initial Residual<br>
>>>>>>>>> # L_2 Residual: 1.09057<br>
>>>>>>>>> # Au - b = Au + F(0)<br>
>>>>>>>>> # Linear L_2 Residual: 1.09057<br>
>>>>>>>>> # [d470c54ce086:14127] Read -1, expected 4096, errno = 1<br>
>>>>>>>>> # [d470c54ce086:14128] Read -1, expected 4096, errno = 1<br>
>>>>>>>>> # [d470c54ce086:14129] Read -1, expected 4096, errno = 1<br>
>>>>>>>>> # [3]PETSC ERROR:<br>
>>>>>>>>> ------------------------------------------------------------------------<br>
>>>>>>>>> # [3]PETSC ERROR: Caught signal number 11 SEGV:<br>
>>>>>>>>> Segmentation Violation, probably memory access out of range<br>
>>>>>>>>> # [3]PETSC ERROR: Try option -start_in_debugger or<br>
>>>>>>>>> -on_error_attach_debugger<br>
>>>>>>>>> # [3]PETSC ERROR: or see<br>
>>>>>>>>> <a href="https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind" rel="noreferrer" target="_blank">https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a><br>
>>>>>>>>> <<a href="https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind" rel="noreferrer" target="_blank">https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind</a>><br>
>>>>>>>>> # [3]PETSC ERROR: or try <a href="http://valgrind.org" rel="noreferrer" target="_blank">http://valgrind.org</a><br>
>>>>>>>>> <<a href="http://valgrind.org/" rel="noreferrer" target="_blank">http://valgrind.org/</a>> on GNU/linux and Apple Mac OS X to<br>
>>>>>>>>> find memory corruption errors<br>
>>>>>>>>> # [3]PETSC ERROR: likely location of problem given in stack<br>
>>>>>>>>> below<br>
>>>>>>>>> # [3]PETSC ERROR: --------------------- Stack Frames<br>
>>>>>>>>> ------------------------------------<br>
>>>>>>>>> # [3]PETSC ERROR: Note: The EXACT line numbers in the stack<br>
>>>>>>>>> are not available,<br>
>>>>>>>>> # [3]PETSC ERROR: INSTEAD the line number of the start of<br>
>>>>>>>>> the function<br>
>>>>>>>>> # [3]PETSC ERROR: is given.<br>
>>>>>>>>> # [3]PETSC ERROR: [3] buildTwo line 987<br>
>>>>>>>>> /opt/petsc-main/include/HPDDM_schwarz.hpp<br>
>>>>>>>>> # [3]PETSC ERROR: [3] next line 1130<br>
>>>>>>>>> /opt/petsc-main/include/HPDDM_schwarz.hpp<br>
>>>>>>>>> # [3]PETSC ERROR: --------------------- Error Message<br>
>>>>>>>>> --------------------------------------------------------------<br>
>>>>>>>>> # [3]PETSC ERROR: Signal received<br>
>>>>>>>>> # [3]PETSC ERROR: [0]PETSC ERROR:<br>
>>>>>>>>> ------------------------------------------------------------------------<br>
>>>>>>>>><br>
>>>>>>>>> Also, ex12_quad_hpddm_reuse_baij fails with a lot more "Read<br>
>>>>>>>>> -1, expected ..." messages, and I don't know where they come from...<br>
>>>>>>>>><br>
>>>>>>>>> Hypre (like in diff-snes_tutorials-ex56_hypre) is also<br>
>>>>>>>>> having DIVERGED_INDEFINITE_PC failures...<br>
>>>>>>>>><br>
>>>>>>>>> Please see the 3 attached Docker files:<br>
>>>>>>>>><br>
>>>>>>>>> 1) fedora_mkl_and_devtools: the Dockerfile which installs<br>
>>>>>>>>> Fedora 33 with the GNU compilers, MKL, and everything needed to develop.<br>
>>>>>>>>><br>
>>>>>>>>> 2) openmpi: the Dockerfile to build OpenMPI.<br>
>>>>>>>>><br>
>>>>>>>>> 3) petsc: the last Dockerfile, which builds, installs, and tests PETSc.<br>
>>>>>>>>><br>
>>>>>>>>> I build the 3 like this:<br>
>>>>>>>>><br>
>>>>>>>>> docker build -t fedora_mkl_and_devtools -f<br>
>>>>>>>>> fedora_mkl_and_devtools .<br>
>>>>>>>>><br>
>>>>>>>>> docker build -t openmpi -f openmpi .<br>
>>>>>>>>><br>
>>>>>>>>> docker build -t petsc -f petsc .<br>
>>>>>>>>><br>
>>>>>>>>> Disclaimer: I am not a Docker expert, so I may do things<br>
>>>>>>>>> that are not Docker state-of-the-art, but I am open to<br>
>>>>>>>>> suggestions... ;)<br>
>>>>>>>>><br>
>>>>>>>>> I have just run it on my laptop (slow), which doesn't have<br>
>>>>>>>>> enough cores, so many more tests failed (I should force<br>
>>>>>>>>> --oversubscribe but don't know how to). I will relaunch on<br>
>>>>>>>>> my workstation in a few minutes.<br>
>>>>>>>>><br>
>>>>>>>>> I will now test your branch! (sorry for the delay).<br>
>>>>>>>>><br>
>>>>>>>>> Thanks,<br>
>>>>>>>>><br>
>>>>>>>>> Eric<br>
>>>>>>>>><br>
>>>>>>>>> On 2021-03-11 9:03 a.m., Eric Chamberland wrote:<br>
>>>>>>>>>><br>
>>>>>>>>>> Hi Pierre,<br>
>>>>>>>>>><br>
>>>>>>>>>> ok, that's interesting!<br>
>>>>>>>>>><br>
>>>>>>>>>> I will try to build a Docker image by tomorrow and give<br>
>>>>>>>>>> you the exact recipe to reproduce the bugs.<br>
>>>>>>>>>><br>
>>>>>>>>>> Eric<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> On 2021-03-11 2:46 a.m., Pierre Jolivet wrote:<br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>>> On 11 Mar 2021, at 6:16 AM, Barry Smith<br>
>>>>>>>>>>>> <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a> <mailto:<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>> wrote:<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Eric,<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Sorry about not being more immediate. We still have<br>
>>>>>>>>>>>> this in our active email so you don't need to submit<br>
>>>>>>>>>>>> individual issues. We'll try to get to them as soon as<br>
>>>>>>>>>>>> we can.<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Indeed, I’m still trying to figure this out.<br>
>>>>>>>>>>> I realized that some of my configure flags were different<br>
>>>>>>>>>>> than yours, e.g., no --with-memalign.<br>
>>>>>>>>>>> I’ve also added SuperLU_DIST to my installation.<br>
>>>>>>>>>>> Still, I can’t reproduce any issue.<br>
>>>>>>>>>>> I will continue looking into this, it appears I’m seeing<br>
>>>>>>>>>>> some valgrind errors, but I don’t know if this is some<br>
>>>>>>>>>>> side effect of OpenMPI not being valgrind-clean (last<br>
>>>>>>>>>>> time I checked, there was no error with MPICH).<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Thank you for your patience,<br>
>>>>>>>>>>> Pierre<br>
>>>>>>>>>>><br>
>>>>>>>>>>> /usr/bin/gmake -f gmakefile test test-fail=1<br>
>>>>>>>>>>> Using MAKEFLAGS: test-fail=1<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex12_quad_hpddm_reuse_baij.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex12_quad_hpddm_reuse_baij<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex12_quad_hpddm_reuse_baij<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tests-ex33_superlu_dist_2.counts<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex33_superlu_dist_2<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex33_superlu_dist_2<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tests-ex49_superlu_dist.counts<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-0<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-0<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-1<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-0_conv-1<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-0<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-0<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-1<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-1herm-1_conv-1<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-0<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-0<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-1<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-0_conv-1<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-0<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-0<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-1<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex49_superlu_dist+nsize-4herm-1_conv-1<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex50_tut_2.counts<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex50_tut_2<br>
>>>>>>>>>>> ok diff-ksp_ksp_tutorials-ex50_tut_2<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tests-ex33_superlu_dist.counts<br>
>>>>>>>>>>> ok ksp_ksp_tests-ex33_superlu_dist<br>
>>>>>>>>>>> ok diff-ksp_ksp_tests-ex33_superlu_dist<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex56_hypre.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex56_hypre<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex56_hypre<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex56_2.counts<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex56_2<br>
>>>>>>>>>>> ok diff-ksp_ksp_tutorials-ex56_2<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex17_3d_q3_trig_elas.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex17_3d_q3_trig_elas<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex17_3d_q3_trig_elas<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5_superlu_dist_3.counts<br>
>>>>>>>>>>> not ok ksp_ksp_tutorials-ex5_superlu_dist_3 # Error code: 1<br>
>>>>>>>>>>> #srun: error: Unable to create step for job 1426755: More<br>
>>>>>>>>>>> processors requested than permitted<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex5_superlu_dist_3 # SKIP Command<br>
>>>>>>>>>>> failed so no diff<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5f_superlu_dist.counts<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex5f_superlu_dist # SKIP Fortran<br>
>>>>>>>>>>> required for this test<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex12_tri_parmetis_hpddm_baij.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex12_tri_parmetis_hpddm_baij<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex12_tri_parmetis_hpddm_baij<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex19_tut_3.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex19_tut_3<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex19_tut_3<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex17_3d_q3_trig_vlap.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex17_3d_q3_trig_vlap<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex17_3d_q3_trig_vlap<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5f_superlu_dist_3.counts<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex5f_superlu_dist_3 # SKIP Fortran<br>
>>>>>>>>>>> required for this test<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex19_superlu_dist.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex19_superlu_dist<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex19_superlu_dist<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex56_attach_mat_nearnullspace-1_bddc_approx_hypre.counts<br>
>>>>>>>>>>> ok<br>
>>>>>>>>>>> snes_tutorials-ex56_attach_mat_nearnullspace-1_bddc_approx_hypre<br>
>>>>>>>>>>> ok<br>
>>>>>>>>>>> diff-snes_tutorials-ex56_attach_mat_nearnullspace-1_bddc_approx_hypre<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex49_hypre_nullspace.counts<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex49_hypre_nullspace<br>
>>>>>>>>>>> ok diff-ksp_ksp_tutorials-ex49_hypre_nullspace<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex19_superlu_dist_2.counts<br>
>>>>>>>>>>> ok snes_tutorials-ex19_superlu_dist_2<br>
>>>>>>>>>>> ok diff-snes_tutorials-ex19_superlu_dist_2<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5_superlu_dist_2.counts<br>
>>>>>>>>>>> not ok ksp_ksp_tutorials-ex5_superlu_dist_2 # Error code: 1<br>
>>>>>>>>>>> #srun: error: Unable to create step for job 1426755: More<br>
>>>>>>>>>>> processors requested than permitted<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex5_superlu_dist_2 # SKIP Command<br>
>>>>>>>>>>> failed so no diff<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/snes_tutorials-ex56_attach_mat_nearnullspace-0_bddc_approx_hypre.counts<br>
>>>>>>>>>>> ok<br>
>>>>>>>>>>> snes_tutorials-ex56_attach_mat_nearnullspace-0_bddc_approx_hypre<br>
>>>>>>>>>>> ok<br>
>>>>>>>>>>> diff-snes_tutorials-ex56_attach_mat_nearnullspace-0_bddc_approx_hypre<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex64_1.counts<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex64_1<br>
>>>>>>>>>>> ok diff-ksp_ksp_tutorials-ex64_1<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5_superlu_dist.counts<br>
>>>>>>>>>>> not ok ksp_ksp_tutorials-ex5_superlu_dist # Error code: 1<br>
>>>>>>>>>>> #srun: error: Unable to create step for job 1426755: More<br>
>>>>>>>>>>> processors requested than permitted<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex5_superlu_dist # SKIP Command<br>
>>>>>>>>>>> failed so no diff<br>
>>>>>>>>>>> TEST<br>
>>>>>>>>>>> arch-linux2-c-opt-ompi/tests/counts/ksp_ksp_tutorials-ex5f_superlu_dist_2.counts<br>
>>>>>>>>>>> ok ksp_ksp_tutorials-ex5f_superlu_dist_2 # SKIP Fortran<br>
>>>>>>>>>>> required for this test<br>
>>>>>>>>>>><br>
>>>>>>>>>>>> Barry<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>>> On Mar 10, 2021, at 11:03 PM, Eric Chamberland<br>
>>>>>>>>>>>>> <<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a><br>
>>>>>>>>>>>>> <mailto:<a href="mailto:Eric.Chamberland@giref.ulaval.ca" target="_blank">Eric.Chamberland@giref.ulaval.ca</a>>> wrote:<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Barry,<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> to get some follow-up on the --with-openmp=1 failures,<br>
>>>>>>>>>>>>> shall I open GitLab issues for:<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> a) all hypre failures giving DIVERGED_INDEFINITE_PC<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> b) all superlu_dist failures giving different results<br>
>>>>>>>>>>>>> with initia and "Exceeded timeout limit of 60 s"<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> c) hpddm failures "free(): invalid next size (fast)"<br>
>>>>>>>>>>>>> and "Segmentation Violation"<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> d) all tao's "Exceeded timeout limit of 60 s"<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> I don't see how I could do all this debugging by myself...<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Eric<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>> -- <br>
>>>>>>>>>> Eric Chamberland, ing., M. Ing<br>
>>>>>>>>>> Professionnel de recherche<br>
>>>>>>>>>> GIREF/Université Laval<br>
>>>>>>>>>> (418) 656-2131 poste 41 22 42<br>
>>>>>>>>> -- <br>
>>>>>>>>> Eric Chamberland, ing., M. Ing<br>
>>>>>>>>> Professionnel de recherche<br>
>>>>>>>>> GIREF/Université Laval<br>
>>>>>>>>> (418) 656-2131 poste 41 22 42<br>
>>>>>>>>> <fedora_mkl_and_devtools.txt><openmpi.txt><petsc.txt><br>
>>>>>>>><br>
>>>>>>><br>
>>>>>><br>
>>>>> -- <br>
>>>>> Eric Chamberland, ing., M. Ing<br>
>>>>> Professionnel de recherche<br>
>>>>> GIREF/Université Laval<br>
>>>>> (418) 656-2131 poste 41 22 42<br>
>>>><br>
>>> -- <br>
>>> Eric Chamberland, ing., M. Ing<br>
>>> Professionnel de recherche<br>
>>> GIREF/Université Laval<br>
>>> (418) 656-2131 poste 41 22 42<br>
>><br>
> -- <br>
> Eric Chamberland, ing., M. Ing<br>
> Professionnel de recherche<br>
> GIREF/Université Laval<br>
> (418) 656-2131 poste 41 22 42<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>