<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Dear All,</p>
<p>I have a question regarding a strange partitioning problem with PETSc
3.11. The problem does not appear on my local workstation. However, on
a cluster the partitioning differs considerably between PETSc versions,
as you can see in the figure below, which was run with 160 processors.
Each color indicates the processor that owns that subdomain. The mesh
is a layered prism mesh with 40 layers from bottom to top and around
20k nodes per layer. The natural node ordering is also layered from
bottom to top.<br>
</p>
<p>The left partition (PETSc 3.10 and earlier) looks good, with a
minimal number of ghost nodes, while the right one (PETSc 3.11) looks
weird, with a huge number of ghost nodes; it appears to partition the
mesh layer by layer. The problem occurs on the cluster but not on my
local workstation with the same PETSc version (built with a different
compiler and MPI). Other than the difference in partitioning and
efficiency, the simulation results are identical.<br>
</p>
<br>
<p><img src="cid:part1.CCBC734D.B18C826F@gmail.com" alt="partition
difference" width="1054" height="633"></p>
<p>Below is the PETSc configuration on the three machines:</p>
<p>Local workstation (works fine): ./configure --with-cc=gcc
--with-cxx=g++ --with-fc=gfortran --download-mpich
--download-scalapack --download-parmetis --download-metis
--download-ptscotch --download-fblaslapack --download-hypre
--download-superlu_dist --download-hdf5=yes --download-ctetgen
--with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3
--with-cxx-dialect=C++11</p>
<p>Cluster with PETSc 3.9.3 (works fine):
--prefix=/scinet/niagara/software/2018a/opt/intel-2018.2-intelmpi-2018.2/petsc/3.9.3
CC=mpicc CXX=mpicxx F77=mpif77 F90=mpif90 FC=mpifc
COPTFLAGS="-march=native -O2" CXXOPTFLAGS="-march=native -O2"
FOPTFLAGS="-march=native -O2" --download-chaco=1
--download-hypre=1 --download-metis=1 --download-ml=1
--download-mumps=1 --download-parmetis=1 --download-plapack=1
--download-prometheus=1 --download-ptscotch=1 --download-scotch=1
--download-sprng=1 --download-superlu=1 --download-superlu_dist=1
--download-triangle=1 --with-avx512-kernels=1
--with-blaslapack-dir=/scinet/niagara/intel/2018.2/compilers_and_libraries_2018.2.199/linux/mkl
--with-debugging=0 --with-hdf5=1
--with-mkl_pardiso-dir=/scinet/niagara/intel/2018.2/compilers_and_libraries_2018.2.199/linux/mkl
--with-scalapack=1
--with-scalapack-lib="[/scinet/niagara/intel/2018.2/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so,/scinet/niagara/intel/2018.2/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so]"
--with-x=0</p>
<p>Cluster with PETSc 3.11.3 (looks weird):
--prefix=/scinet/niagara/software/2019b/opt/intel-2019u4-intelmpi-2019u4/petsc/3.11.3
CC=mpicc CXX=mpicxx F77=mpif77 F90=mpif90 FC=mpifc
COPTFLAGS="-march=native -O2" CXXOPTFLAGS="-march=native -O2"
FOPTFLAGS="-march=native -O2" --download-chaco=1 --download-hdf5=1
--download-hypre=1 --download-metis=1 --download-ml=1
--download-mumps=1 --download-parmetis=1 --download-plapack=1
--download-prometheus=1 --download-ptscotch=1 --download-scotch=1
--download-sprng=1 --download-superlu=1 --download-superlu_dist=1
--download-triangle=1 --with-avx512-kernels=1
--with-blaslapack-dir=/scinet/intel/2019u4/compilers_and_libraries_2019.4.243/linux/mkl
--with-cxx-dialect=C++11 --with-debugging=0
--with-mkl_pardiso-dir=/scinet/intel/2019u4/compilers_and_libraries_2019.4.243/linux/mkl
--with-scalapack=1
--with-scalapack-lib="[/scinet/intel/2019u4/compilers_and_libraries_2019.4.243/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so,/scinet/intel/2019u4/compilers_and_libraries_2019.4.243/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so]"
--with-x=0</p>
<p>The partitioning uses the default DMPlex distribution:</p>
<pre>      !c distribute mesh over processes
      call DMPlexDistribute(dmda_flow%da, stencil_width,       &
                            PETSC_NULL_SF,                     &
                            PETSC_NULL_OBJECT,                 &
                            distributedMesh, ierr)
      CHKERRQ(ierr)</pre>
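<p>One thing I considered, but have not verified, is pinning the
partitioner type explicitly before distribution. Below is a minimal
sketch of what I have in mind, assuming the Fortran bindings for
DMPlexGetPartitioner, PetscPartitionerSetType and
PetscPartitionerSetFromOptions are available in 3.11; the
PetscPartitioner handle "part" is new here, while dmda_flow%da,
stencil_width and distributedMesh are from my code above. Would this
be the right way to control which partitioner is used?</p>
<pre>      !c sketch only (not verified): pin the graph partitioner before
      !c distributing, instead of relying on the version default
      PetscPartitioner :: part

      call DMPlexGetPartitioner(dmda_flow%da, part, ierr)
      CHKERRQ(ierr)
      !c e.g. 'parmetis'; 'ptscotch' should also be possible if built in
      call PetscPartitionerSetType(part, 'parmetis', ierr)
      CHKERRQ(ierr)
      !c or leave the choice to the command line instead
      call PetscPartitionerSetFromOptions(part, ierr)
      CHKERRQ(ierr)
      call DMPlexDistribute(dmda_flow%da, stencil_width,       &
                            PETSC_NULL_SF,                     &
                            PETSC_NULL_OBJECT,                 &
                            distributedMesh, ierr)
      CHKERRQ(ierr)</pre>
<p>If there is instead a run-time option (something like
-petscpartitioner_type) that the default DMPlexDistribute path already
respects, that would of course be even simpler.</p>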
<p>Any ideas on this strange problem?</p>
<p>Thanks,</p>
<p>Danyang<br>
</p>
</body>
</html>