<div dir="ltr">Do we have a policy on this? I know some tests say they are serial.<div>Maybe just say that tests are only supported for the parameters in the test.</div><div><br></div><div>Yuan: at the bottom of all tests and tutorials are example input arguments and parallel run configurations.<br></div><div><br></div><div>This tutorial is very rudimentary as you can see by:</div><div><br></div><div>!/*TEST<br>!<br>! test:<br>! suffix: 0<br>!<br>!TEST*/<br></div><div><br></div><div>If you are looking for a parallel test, find one that has something like this:</div><div><br></div><div> test:<br> suffix: mesh_2<br><b> nsize: 2<br></b> requires: exodusii<br> args: -dm_distribute -petscpartitioner_type simple -dm_plex_filename ${wPETSC_DIR}/share/petsc/datafiles/meshes/sevenside-quad-15.exo -orth_qual_atol 0.95<br>TEST*/<br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Oct 30, 2021 at 12:17 PM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Yes, it is a serial test.<div><br></div><div> Thanks,</div><div><br></div><div> Matt</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Oct 30, 2021 at 9:38 AM 袁煕 <<a href="mailto:yuanxi@advancesoft.jp" target="_blank">yuanxi@advancesoft.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Thank you for your reply.<div><br></div><div>I have solved the problem by modifying</div><div>----------------------------------------------------</div><div>call DMPlexCreateFromDAG(dm, depth, numPoints, coneSize, cones,coneOrientations, vertexCoords, ierr);CHKERRA(ierr)<br></div><div>----------------------------------------------------</div><div>into </div><div>-----------------------------------------------------</div><div>numPoints1 = [0, 0, 0, 0]<br></div><div>if (rank == 0) then<br> call DMPlexCreateFromDAG(dm, depth, numPoints, coneSize, cones,coneOrientations, vertexCoords, ierr);CHKERRA(ierr)<br> else<br> call DMPlexCreateFromDAG(dm, 3, numPoints1, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, PETSC_NULL_REAL, ierr)<br>endif<br></div><div>----------------------------------------------------</div><div><br></div><div>The result obtained as follows</div><div><br></div><div>DM Object: testplex 2 MPI processes<br> type: plex<br>testplex in 3 dimensions:<br> 0-cells: 12 0<br> 1-cells: 20 0<br> 2-cells: 11 0<br> 3-cells: 2 0<br>Labels:<br> celltype: 4 strata with value/size (0 (12), 7 (2), 4 (11), 1 (20))<br> depth: 4 strata with value/size (0 (12), 1 (20), 2 (11), 3 (2))<br>cell: 0 volume: 0.5000 centroid: -0.2500 0.5000 0.5000<br>cell: 1 volume: 0.5000 centroid: 0.2500 0.5000 0.5000<br><br>===================================================================================<br>= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES<br>= RANK 0 PID 15428 RUNNING AT DESKTOP-9ITFSBM<br>= KILLED BY SIGNAL: 9 (Killed)<br>===================================================================================<br></div><div><br></div><div>There is still problem left. 
On Sat, Oct 30, 2021 at 12:17 PM Matthew Knepley <knepley@gmail.com> wrote:

Yes, it is a serial test.

  Thanks,

     Matt

On Sat, Oct 30, 2021 at 9:38 AM 袁煕 <yuanxi@advancesoft.jp> wrote:

Thank you for your reply.

I have solved the problem by modifying
----------------------------------------------------
call DMPlexCreateFromDAG(dm, depth, numPoints, coneSize, cones, coneOrientations, vertexCoords, ierr);CHKERRA(ierr)
----------------------------------------------------
into
-----------------------------------------------------
numPoints1 = [0, 0, 0, 0]
if (rank == 0) then
  call DMPlexCreateFromDAG(dm, depth, numPoints, coneSize, cones, coneOrientations, vertexCoords, ierr);CHKERRA(ierr)
else
  call DMPlexCreateFromDAG(dm, 3, numPoints1, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_REAL, ierr)
endif
----------------------------------------------------

The result obtained is as follows:

DM Object: testplex 2 MPI processes
  type: plex
testplex in 3 dimensions:
  0-cells: 12 0
  1-cells: 20 0
  2-cells: 11 0
  3-cells: 2 0
Labels:
  celltype: 4 strata with value/size (0 (12), 7 (2), 4 (11), 1 (20))
  depth: 4 strata with value/size (0 (12), 1 (20), 2 (11), 3 (2))
cell: 0 volume: 0.5000 centroid: -0.2500 0.5000 0.5000
cell: 1 volume: 0.5000 centroid: 0.2500 0.5000 0.5000

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 15428 RUNNING AT DESKTOP-9ITFSBM
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================

There is still a problem left; I believe it is related.
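A minimal sketch of how such a rank-0-only mesh is usually made usable on every rank is to distribute it after the DMPlexInterpolate call. This mirrors the pattern used in other plex examples rather than anything in ex3f90 itself; dmDist and overlap are illustrative local names, so treat it as a sketch rather than the resolution reached in this thread:

-----------------------------------------------------
! sketch only: declare, with the other locals,
!   DM       :: dmDist
!   PetscInt :: overlap
overlap = 0
! move the serial mesh from rank 0 onto all ranks of the communicator
call DMPlexDistribute(dm, overlap, PETSC_NULL_SF, dmDist, ierr);CHKERRA(ierr)
if (dmDist .ne. PETSC_NULL_DM) then
  ! a new DM is returned only when the mesh was actually redistributed
  call DMDestroy(dm, ierr);CHKERRA(ierr)
  dm = dmDist
endif
-----------------------------------------------------

After distribution every rank owns part of the mesh, so any per-cell loop should use each rank's local cell range.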
On Sat, Oct 30, 2021 at 21:51 Mark Adams <mfadams@lbl.gov> wrote:

Ah, I can reproduce this error with debugging turned on.
This test is not a parallel test, but it does not say that serial is a requirement.
So there is a problem here.
Anyone?

On Sat, Oct 30, 2021 at 8:29 AM Mark Adams <mfadams@lbl.gov> wrote:

08:27 adams/pcksp-batch-kokkos *= summit:/gpfs/alpine/csc314/scratch/adams/petsc/src/dm/impls/plex/tutorials$ make PETSC_ARCH=arch-summit-opt-gnu-kokkos-cuda ex3f90
mpifort -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -I/gpfs/alpine/csc314/scratch/adams/petsc/include -I/gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-kokkos-cuda/include -I/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/hdf5-1.10.7-yxvwkhm4nhgezbl2mwzdruwoaiblt6q2/include -I/sw/summit/cuda/11.0.3/include ex3f90.F90 -Wl,-rpath,/gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-kokkos-cuda/lib -L/gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-kokkos-cuda/lib -Wl,-rpath,/gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-kokkos-cuda/lib -L/gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-kokkos-cuda/lib -L/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/netlib-lapack-3.9.1-t2a6tcso5tkezcjmfrqvqi2cpary7kgx/lib64 -Wl,-rpath,/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/hdf5-1.10.7-yxvwkhm4nhgezbl2mwzdruwoaiblt6q2/lib -L/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/hdf5-1.10.7-yxvwkhm4nhgezbl2mwzdruwoaiblt6q2/lib -Wl,-rpath,/sw/summit/cuda/11.0.3/lib64 -L/sw/summit/cuda/11.0.3/lib64 -L/sw/summit/cuda/11.0.3/lib64/stubs -Wl,-rpath,/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/spectrum-mpi-10.4.0.3-20210112-6jbupg3thjwhsabgevk6xmwhd2bbyxdc/lib -L/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/spectrum-mpi-10.4.0.3-20210112-6jbupg3thjwhsabgevk6xmwhd2bbyxdc/lib -Wl,-rpath,/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib/gcc/powerpc64le-unknown-linux-gnu/9.1.0 -L/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib/gcc/powerpc64le-unknown-linux-gnu/9.1.0 -Wl,-rpath,/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib/gcc -L/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib/gcc -Wl,-rpath,/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/netlib-lapack-3.9.1-t2a6tcso5tkezcjmfrqvqi2cpary7kgx/lib64 -Wl,-rpath,/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib64 -L/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib64 -Wl,-rpath,/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-8.3.1/darshan-runtime-3.3.0-mu6tnxlhxfplrq3srkkgi5dvly6wenwy/lib -L/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-8.3.1/darshan-runtime-3.3.0-mu6tnxlhxfplrq3srkkgi5dvly6wenwy/lib -Wl,-rpath,/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib -L/autofs/nccs-svm1_sw/summit/gcc/9.1.0-alpha+20190716/lib -lpetsc -lkokkoskernels -lkokkoscontainers -lkokkoscore -lp4est -lsc -lblas -llapack -lhdf5_hl -lhdf5 -lm -lz -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -lcuda -lstdc++ -ldl -lmpiprofilesupport -lmpi_ibm_usempif08 -lmpi_ibm_usempi_ignore_tkr -lmpi_ibm_mpifh -lmpi_ibm -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lquadmath -lstdc++ -ldl -o ex3f90
08:27 adams/pcksp-batch-kokkos *= summit:/gpfs/alpine/csc314/scratch/adams/petsc/src/dm/impls/plex/tutorials$ jsrun -n 2 -g 1 ./ex3f90
DM Object: testplex 2 MPI processes
  type: plex
testplex in 3 dimensions:
  0-cells: 12 12
  1-cells: 20 20
  2-cells: 11 11
  3-cells: 2 2
Labels:
  celltype: 4 strata with value/size (0 (12), 7 (2), 4 (11), 1 (20))
  depth: 4 strata with value/size (0 (12), 1 (20), 2 (11), 3 (2))
cell: 0 volume: 0.5000 centroid: -0.2500 0.5000 0.5000
cell: 1 volume: 0.5000 centroid: 0.2500 0.5000 0.5000
cell: 0 volume: 0.5000 centroid: -0.2500 0.5000 0.5000
cell: 1 volume: 0.5000 centroid: 0.2500 0.5000 0.5000
08:28 adams/pcksp-batch-kokkos *= summit:/gpfs/alpine/csc314/scratch/adams/pets

On Fri, Oct 29, 2021 at 10:41 PM 袁煕 <yuanxi@advancesoft.jp> wrote:
Thanks, Mark.

I did what you suggested but nothing changed. Besides, from your compile history and result:

- you used gfortran with no MPI library, not mpif90
- the two processes give exactly the same result
- the first line of the DMView output should be "DM Object: testplex 2 MPI processes", not "DM Object: testplex 1 MPI processes", when you use 2 CPUs

It seems that you did not actually use MPI; the two processes just did exactly the same thing independently.

Best regards,

Yuan
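A quick way to confirm what is happening is to print the size of PETSC_COMM_WORLD from each process: with a matching mpirun, "mpirun -np 2" should report size 2 on both ranks, while a mismatched mpirun (or a PETSc built without MPI) reports size 1 twice. A hypothetical standalone check, not part of the PETSc tree, could look like:

-----------------------------------------------------
program commcheck
#include <petsc/finclude/petscsys.h>
      use petscsys
      implicit none
      PetscErrorCode :: ierr
      PetscMPIInt    :: rank, nsize

      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
      if (ierr .ne. 0) stop 'PetscInitialize failed'
      ! report how many ranks actually share the communicator
      call MPI_Comm_size(PETSC_COMM_WORLD, nsize, ierr)
      call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)
      print *, 'rank', rank, 'of', nsize
      call PetscFinalize(ierr)
end program commcheck
-----------------------------------------------------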
On Fri, Oct 29, 2021 at 20:22 Mark Adams <mfadams@lbl.gov> wrote:

This works for me (appended) using an up-to-date version of PETSc.

I would delete the architecture directory, reconfigure, run make all, and try again.

Next, you seem to be using git. Use the 'main' branch and try again.

Mark

(base) 07:09 adams/swarm-omp-pc *= ~/Codes/petsc$ cd src/dm/impls/plex/tutorials/
(base) 07:16 adams/swarm-omp-pc *= ~/Codes/petsc/src/dm/impls/plex/tutorials$ make PETSC_DIR=/Users/markadams/Codes/petsc PETSC_ARCH=arch-macosx-gnu-g ex3f90
gfortran-11 -Wl,-bind_at_load -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -I/Users/markadams/Codes/petsc/include -I/Users/markadams/Codes/petsc/arch-macosx-gnu-g/include ex3f90.F90 -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -Wl,-rpath,/usr/local/Cellar/gcc/11.2.0/lib/gcc/11/gcc/x86_64-apple-darwin20/11.2.0 -L/usr/local/Cellar/gcc/11.2.0/lib/gcc/11/gcc/x86_64-apple-darwin20/11.2.0 -Wl,-rpath,/usr/local/Cellar/gcc/11.2.0/lib/gcc/11 -L/usr/local/Cellar/gcc/11.2.0/lib/gcc/11 -lpetsc -lp4est -lsc -llapack -lblas -lhdf5_hl -lhdf5 -lmetis -lz -lstdc++ -ldl -lgcc_s.1 -lgfortran -lquadmath -lm -lquadmath -lstdc++ -ldl -lgcc_s.1 -o ex3f90
(base) 07:16 adams/swarm-omp-pc *= ~/Codes/petsc/src/dm/impls/plex/tutorials$ mpirun -np 2 ./ex3f90
DM Object: testplex 1 MPI processes
  type: plex
testplex in 3 dimensions:
  0-cells: 12
  1-cells: 20
  2-cells: 11
  3-cells: 2
Labels:
  celltype: 4 strata with value/size (0 (12), 7 (2), 4 (11), 1 (20))
  depth: 4 strata with value/size (0 (12), 1 (20), 2 (11), 3 (2))
DM Object: testplex 1 MPI processes
  type: plex
testplex in 3 dimensions:
  0-cells: 12
  1-cells: 20
  2-cells: 11
  3-cells: 2
Labels:
  celltype: 4 strata with value/size (0 (12), 7 (2), 4 (11), 1 (20))
  depth: 4 strata with value/size (0 (12), 1 (20), 2 (11), 3 (2))
cell: 0 volume: 0.5000 centroid: -0.2500 0.5000 0.5000
cell: 1 volume: 0.5000 centroid: 0.2500 0.5000 0.5000
cell: 0 volume: 0.5000 centroid: -0.2500 0.5000 0.5000
cell: 1 volume: 0.5000 centroid: 0.2500 0.5000 0.5000

On Fri, Oct 29, 2021 at 6:11 AM 袁煕 <yuanxi@advancesoft.jp> wrote:

Hi,

I have tried to run the test case ex3f90 in the folder src/dm/impls/plex/tutorials in parallel but found that it fails. When I run it on 1 CPU with

- mpirun -np 1 ./ex3f90

everything seems OK. But when I run it on 2 CPUs with

- mpirun -np 2 ./ex3f90

I get the following error message:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: This DMPlex is distributed but its PointSF has no graph set
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.16.0-248-ge617e6467c GIT Date: 2021-10-19 23:11:25 -0500
[0]PETSC ERROR: ./ex3f90 on a named pc-010-088 by Fri Oct 29 18:48:54 2021
[0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpiifort --with-fortran-bindings=1 --with-debugging=0 --with-blaslapack-dir=/opt/intel/oneapi/mkl/2021.4.0 --with-mkl_pardiso-dir=/opt/intel/oneapi/mkl/2021.4.0 --download-metis=1 --download-parmetis=1 --download-cmake --force --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 --download-hypre=1 --download-ml=1 --with-debugging=yes --prefix=/home/yuanxi
[0]PETSC ERROR: #1 DMPlexCheckPointSF() at /home/yuanxi/myprograms/petsc/src/dm/impls/plex/plex.c:8626
[0]PETSC ERROR: #2 DMPlexOrientInterface_Internal() at /home/yuanxi/myprograms/petsc/src/dm/impls/plex/plexinterpolate.c:595
[0]PETSC ERROR: #3 DMPlexInterpolate() at /home/yuanxi/myprograms/petsc/src/dm/impls/plex/plexinterpolate.c:1357
[0]PETSC ERROR: #4 User provided function() at User file:0
Abort(73) on node 0 (rank 0 in comm 16): application called MPI_Abort(MPI_COMM_SELF, 73) - process 0
------------------------------------------------------------------------------------------------------------------------------------

It fails when calling DMPlexInterpolate. Maybe this program is not intended to be run in parallel. But if I wish to do so, how should I modify it so that it runs on multiple CPUs?

Many thanks for your help,

Yuan
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/