From alexlindsay239 at gmail.com Sun Mar 1 13:41:53 2020 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Sun, 1 Mar 2020 11:41:53 -0800 Subject: [petsc-users] Scraping MPI information from PETSc conf In-Reply-To: References: Message-ID: Is it safe to assume that mpicxx will always add the requisite include and library flags? Are there any/many implementations that do not take the -show flag? > On Feb 27, 2020, at 7:15 PM, Satish Balay wrote: > > ?Not really useful for autotools - but we print the mpi.h used during > build in make.log > > Using mpi.h: # 1 "/home/petsc/soft/mpich-3.3b1/include/mpi.h" 1 > > I guess the same code [using a petsc makefile] - can be scripted and > parsed to get the PATH to compare in autotools. > > However the current version check [below] is likely the best way. Our > prior check was deemed too strict - for ex: when linux distros updated > MPI packages with a bug fixed version [without API change] - our prior > check flagged this as incompatible - so we had to change it. > > Satish > >> On Thu, 27 Feb 2020, Jed Brown wrote: >> >> If determining mpicc is sufficient, this will work >> >> pkg-config --var=ccompiler PETSc >> >> We also define >> >> $ grep NUMVERSION mpich-optg/include/petscconf.h >> #define PETSC_HAVE_MPICH_NUMVERSION 30302300 >> >> or >> >> $ grep OMPI_ ompi-optg/include/petscconf.h >> #define PETSC_HAVE_OMPI_MAJOR_VERSION 4 >> #define PETSC_HAVE_OMPI_MINOR_VERSION 0 >> #define PETSC_HAVE_OMPI_RELEASE_VERSION 2 >> >> which PETSc uses to raise a compile-time error if it believes you're >> compiling PETSc code using an incompatible MPI. >> >> Note that some of this is hidden in the environment on Cray systems, for >> example, where CC=cc regardless of what compiler you're actually using. >> >> Alexander Lindsay writes: >> >>> What's the cleanest way to determine the MPI install used to build PETSc? >>> We are configuring a an MPI-based C++ library with autotools that will >>> eventually be used by libMesh, and we'd like to make sure that this library >>> (as well as libMesh) uses the same MPI that PETSc used or at worst detect >>> our own and then error/warn the user if its an MPI that differs from the >>> one used to build PETc. It seems like the only path info that shows up is >>> in MPICXX_SHOW, PETSC_EXTERNAL_LIB_BASIC, and PETSC_WITH_EXTERNAL_LIB (I'm >>> looking in petscvariables). I'm willing to learn the m4/portable shell >>> built-ins necessary to parse those variables and come out with an mpi-dir, >>> but before doing that figured I'd ask here and see if I'm missing something >>> easier. >>> >>> Alex >> > From jed at jedbrown.org Sun Mar 1 14:05:00 2020 From: jed at jedbrown.org (Jed Brown) Date: Sun, 01 Mar 2020 13:05:00 -0700 Subject: [petsc-users] Scraping MPI information from PETSc conf In-Reply-To: References: Message-ID: <87blpfn8nn.fsf@jedbrown.org> Alexander Lindsay writes: > Is it safe to assume that mpicxx will always add the requisite include > and library flags? It it's mpicxx, but such wrappers are not necessarily used. The compile-time macro checks are more reliable than implementing a bunch of cases for MPI library layout and identification. > Are there any/many implementations that do not take the -show flag? 
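(To make the compile-time macro check concrete: a minimal sketch of such a guard, for illustration only and not PETSc's actual test. The PETSC_HAVE_* names come from petscconf.h as quoted below; MPICH_NUMVERSION and OMPI_MAJOR_VERSION come from the MPI headers themselves.)

#include <petscconf.h>
#include <mpi.h>

/* Sketch: refuse to compile if the mpi.h seen here does not look like the
   MPI that PETSc was configured with. PETSc's own guard is more thorough. */
#if defined(PETSC_HAVE_MPICH_NUMVERSION)
#  if !defined(MPICH_NUMVERSION) || MPICH_NUMVERSION < PETSC_HAVE_MPICH_NUMVERSION
#    error "mpi.h does not match the MPICH that PETSc was built with"
#  endif
#elif defined(PETSC_HAVE_OMPI_MAJOR_VERSION)
#  if !defined(OMPI_MAJOR_VERSION) || OMPI_MAJOR_VERSION != PETSC_HAVE_OMPI_MAJOR_VERSION
#    error "mpi.h does not match the Open MPI that PETSc was built with"
#  endif
#endif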
> >> On Feb 27, 2020, at 7:15 PM, Satish Balay wrote: >> >> ?Not really useful for autotools - but we print the mpi.h used during >> build in make.log >> >> Using mpi.h: # 1 "/home/petsc/soft/mpich-3.3b1/include/mpi.h" 1 >> >> I guess the same code [using a petsc makefile] - can be scripted and >> parsed to get the PATH to compare in autotools. >> >> However the current version check [below] is likely the best way. Our >> prior check was deemed too strict - for ex: when linux distros updated >> MPI packages with a bug fixed version [without API change] - our prior >> check flagged this as incompatible - so we had to change it. >> >> Satish >> >>> On Thu, 27 Feb 2020, Jed Brown wrote: >>> >>> If determining mpicc is sufficient, this will work >>> >>> pkg-config --var=ccompiler PETSc >>> >>> We also define >>> >>> $ grep NUMVERSION mpich-optg/include/petscconf.h >>> #define PETSC_HAVE_MPICH_NUMVERSION 30302300 >>> >>> or >>> >>> $ grep OMPI_ ompi-optg/include/petscconf.h >>> #define PETSC_HAVE_OMPI_MAJOR_VERSION 4 >>> #define PETSC_HAVE_OMPI_MINOR_VERSION 0 >>> #define PETSC_HAVE_OMPI_RELEASE_VERSION 2 >>> >>> which PETSc uses to raise a compile-time error if it believes you're >>> compiling PETSc code using an incompatible MPI. >>> >>> Note that some of this is hidden in the environment on Cray systems, for >>> example, where CC=cc regardless of what compiler you're actually using. >>> >>> Alexander Lindsay writes: >>> >>>> What's the cleanest way to determine the MPI install used to build PETSc? >>>> We are configuring a an MPI-based C++ library with autotools that will >>>> eventually be used by libMesh, and we'd like to make sure that this library >>>> (as well as libMesh) uses the same MPI that PETSc used or at worst detect >>>> our own and then error/warn the user if its an MPI that differs from the >>>> one used to build PETc. It seems like the only path info that shows up is >>>> in MPICXX_SHOW, PETSC_EXTERNAL_LIB_BASIC, and PETSC_WITH_EXTERNAL_LIB (I'm >>>> looking in petscvariables). I'm willing to learn the m4/portable shell >>>> built-ins necessary to parse those variables and come out with an mpi-dir, >>>> but before doing that figured I'd ask here and see if I'm missing something >>>> easier. >>>> >>>> Alex >>> >> From balay at mcs.anl.gov Sun Mar 1 14:13:27 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 1 Mar 2020 14:13:27 -0600 Subject: [petsc-users] Scraping MPI information from PETSc conf In-Reply-To: References: Message-ID: On Sun, 1 Mar 2020, Alexander Lindsay wrote: > Is it safe to assume that mpicxx will always add the requisite include and library flags? Yes. There are always broken compilers. Similarly - there can always be broken mpi compilers aswell. > Are there any/many implementations that do not take the -show flag? For ex: cray. [they don't have 'mpicc', and no 'mpicc -show' or 'cc -show' ] Satish > > > On Feb 27, 2020, at 7:15 PM, Satish Balay wrote: > > > > ?Not really useful for autotools - but we print the mpi.h used during > > build in make.log > > > > Using mpi.h: # 1 "/home/petsc/soft/mpich-3.3b1/include/mpi.h" 1 > > > > I guess the same code [using a petsc makefile] - can be scripted and > > parsed to get the PATH to compare in autotools. > > > > However the current version check [below] is likely the best way. Our > > prior check was deemed too strict - for ex: when linux distros updated > > MPI packages with a bug fixed version [without API change] - our prior > > check flagged this as incompatible - so we had to change it. 
> > > > Satish > > > >> On Thu, 27 Feb 2020, Jed Brown wrote: > >> > >> If determining mpicc is sufficient, this will work > >> > >> pkg-config --var=ccompiler PETSc > >> > >> We also define > >> > >> $ grep NUMVERSION mpich-optg/include/petscconf.h > >> #define PETSC_HAVE_MPICH_NUMVERSION 30302300 > >> > >> or > >> > >> $ grep OMPI_ ompi-optg/include/petscconf.h > >> #define PETSC_HAVE_OMPI_MAJOR_VERSION 4 > >> #define PETSC_HAVE_OMPI_MINOR_VERSION 0 > >> #define PETSC_HAVE_OMPI_RELEASE_VERSION 2 > >> > >> which PETSc uses to raise a compile-time error if it believes you're > >> compiling PETSc code using an incompatible MPI. > >> > >> Note that some of this is hidden in the environment on Cray systems, for > >> example, where CC=cc regardless of what compiler you're actually using. > >> > >> Alexander Lindsay writes: > >> > >>> What's the cleanest way to determine the MPI install used to build PETSc? > >>> We are configuring a an MPI-based C++ library with autotools that will > >>> eventually be used by libMesh, and we'd like to make sure that this library > >>> (as well as libMesh) uses the same MPI that PETSc used or at worst detect > >>> our own and then error/warn the user if its an MPI that differs from the > >>> one used to build PETc. It seems like the only path info that shows up is > >>> in MPICXX_SHOW, PETSC_EXTERNAL_LIB_BASIC, and PETSC_WITH_EXTERNAL_LIB (I'm > >>> looking in petscvariables). I'm willing to learn the m4/portable shell > >>> built-ins necessary to parse those variables and come out with an mpi-dir, > >>> but before doing that figured I'd ask here and see if I'm missing something > >>> easier. > >>> > >>> Alex > >> > > > From knepley at gmail.com Sun Mar 1 18:12:29 2020 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 1 Mar 2020 19:12:29 -0500 Subject: [petsc-users] Scraping MPI information from PETSc conf In-Reply-To: References: Message-ID: On Sun, Mar 1, 2020 at 2:43 PM Alexander Lindsay wrote: > Is it safe to assume that mpicxx will always add the requisite include and > library flags? Yes, that is the contract. > Are there any/many implementations that do not take the -show flag? > I thought only MPICH took that flag. Thanks, Matt > > On Feb 27, 2020, at 7:15 PM, Satish Balay wrote: > > > > ?Not really useful for autotools - but we print the mpi.h used during > > build in make.log > > > > Using mpi.h: # 1 "/home/petsc/soft/mpich-3.3b1/include/mpi.h" 1 > > > > I guess the same code [using a petsc makefile] - can be scripted and > > parsed to get the PATH to compare in autotools. > > > > However the current version check [below] is likely the best way. Our > > prior check was deemed too strict - for ex: when linux distros updated > > MPI packages with a bug fixed version [without API change] - our prior > > check flagged this as incompatible - so we had to change it. > > > > Satish > > > >> On Thu, 27 Feb 2020, Jed Brown wrote: > >> > >> If determining mpicc is sufficient, this will work > >> > >> pkg-config --var=ccompiler PETSc > >> > >> We also define > >> > >> $ grep NUMVERSION mpich-optg/include/petscconf.h > >> #define PETSC_HAVE_MPICH_NUMVERSION 30302300 > >> > >> or > >> > >> $ grep OMPI_ ompi-optg/include/petscconf.h > >> #define PETSC_HAVE_OMPI_MAJOR_VERSION 4 > >> #define PETSC_HAVE_OMPI_MINOR_VERSION 0 > >> #define PETSC_HAVE_OMPI_RELEASE_VERSION 2 > >> > >> which PETSc uses to raise a compile-time error if it believes you're > >> compiling PETSc code using an incompatible MPI. 
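(A run-time cross-check is also possible; this is not raised in the thread, only an illustrative sketch. MPI-3's MPI_Get_library_version() reports which MPI library the executable actually loaded, and that string can be compared with what mpi.h advertised at compile time. MPICH_VERSION below is MPICH-specific; other MPIs would need their own macros.)

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
  char libver[MPI_MAX_LIBRARY_VERSION_STRING];
  int  len;

  MPI_Init(&argc, &argv);
  MPI_Get_library_version(libver, &len);
#if defined(MPICH_VERSION)
  /* warn if the loaded library does not mention the header's version string */
  if (!strstr(libver, MPICH_VERSION)) {
    fprintf(stderr, "Warning: runtime MPI \"%s\" does not match mpi.h version %s\n", libver, MPICH_VERSION);
  }
#else
  printf("Runtime MPI library: %s\n", libver);
#endif
  MPI_Finalize();
  return 0;
}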
> >> > >> Note that some of this is hidden in the environment on Cray systems, for > >> example, where CC=cc regardless of what compiler you're actually using. > >> > >> Alexander Lindsay writes: > >> > >>> What's the cleanest way to determine the MPI install used to build > PETSc? > >>> We are configuring a an MPI-based C++ library with autotools that will > >>> eventually be used by libMesh, and we'd like to make sure that this > library > >>> (as well as libMesh) uses the same MPI that PETSc used or at worst > detect > >>> our own and then error/warn the user if its an MPI that differs from > the > >>> one used to build PETc. It seems like the only path info that shows up > is > >>> in MPICXX_SHOW, PETSC_EXTERNAL_LIB_BASIC, and PETSC_WITH_EXTERNAL_LIB > (I'm > >>> looking in petscvariables). I'm willing to learn the m4/portable shell > >>> built-ins necessary to parse those variables and come out with an > mpi-dir, > >>> but before doing that figured I'd ask here and see if I'm missing > something > >>> easier. > >>> > >>> Alex > >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sun Mar 1 18:17:57 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 1 Mar 2020 18:17:57 -0600 Subject: [petsc-users] Scraping MPI information from PETSc conf In-Reply-To: References: Message-ID: On Sun, 1 Mar 2020, Matthew Knepley wrote: > > Are there any/many implementations that do not take the -show flag? > > > > I thought only MPICH took that flag. MPICH, OpenMPI, and derivatives [IntelMPI, MVAPICH] - perhaps more.. Satish From alexlindsay239 at gmail.com Sun Mar 1 20:26:00 2020 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Sun, 1 Mar 2020 19:26:00 -0700 Subject: [petsc-users] Scraping MPI information from PETSc conf In-Reply-To: References: Message-ID: Alright, I think the version checking info is all that I need. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jczhang at mcs.anl.gov Mon Mar 2 17:35:59 2020 From: jczhang at mcs.anl.gov (Junchao Zhang) Date: Mon, 2 Mar 2020 17:35:59 -0600 Subject: [petsc-users] Memory leak at GPU when updating matrix of type mpiaijcusparse (CUDA) In-Reply-To: References: Message-ID: Jorge, I found multiple problems in petsc with your example. I have a fix at https://gitlab.com/petsc/petsc/-/merge_requests/2575 If everything goes well, it will be in maint and master in a few days. Thanks for reporting the problem. --Junchao Zhang On Fri, Feb 28, 2020 at 12:13 PM Junchao Zhang wrote: > I will take a look at it and get back to you. Thanks. 
> > On Fri, Feb 28, 2020, 7:29 AM jordic wrote: > >> Dear all, >> >> the following simple program: >> >> >> ////////////////////////////////////////////////////////////////////////////////////// >> >> #include >> >> PetscInt ierr=0; >> int main(int argc,char **argv) >> { >> MPI_Comm comm; >> PetscMPIInt rank,size; >> >> PetscInitialize(&argc,&argv,NULL,help);if (ierr) return ierr; >> comm = PETSC_COMM_WORLD; >> MPI_Comm_rank(comm,&rank); >> MPI_Comm_size(comm,&size); >> >> Mat A; >> MatCreate(comm, &A); >> MatSetSizes(A, 1, 1, PETSC_DETERMINE, PETSC_DETERMINE); >> MatSetFromOptions(A); >> PetscInt dnz=1, onz=0; >> MatMPIAIJSetPreallocation(A, 0, &dnz, 0, &onz); >> MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE); >> MatSetOption(A, MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE); >> PetscInt igid=rank, jgid=rank; >> PetscScalar value=rank+1.0; >> >> // for(int i=0; i<10; ++i) >> for(;;) //infinite loop >> { >> MatSetValue(A, igid, jgid, value, INSERT_VALUES); >> MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); >> MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); >> } >> MatDestroy(&A); >> PetscFinalize(); >> return ierr; >> } >> >> >> ////////////////////////////////////////////////////////////////////////////////////// >> >> creates a simple diagonal matrix with one value per mpi-core. If the type >> of the matrix is "mpiaij" (-mat_type mpiaij) there is no problem but >> with "mpiaijcusparse" (-mat_type mpiaijcusparse) the memory usage at the >> GPU grows with every iteration of the infinite loop. The only solution that >> I found is to destroy and create the matrix every time that it needs to be >> updated. Is there a better way to avoid this problem? >> >> I am using Petsc Release Version 3.12.2 with this configure options: >> >> Configure options --package-prefix-hash=/home_nobck/user/petsc-hash-pkgs >> --with-debugging=0 --with-fc=0 CC=gcc CXX=g++ --COPTFLAGS="-g -O3" >> --CXXOPTFLAGS="-g -O3" --CUDAOPTFLAGS="-D_FORCE_INLINES -g -O3" >> --with-mpi-include=/usr/lib/openmpi/include >> --with-mpi-lib="-L/usr/lib/openmpi/lib -lmpi_cxx -lmpi" --with-cuda=1 >> --with-precision=double --with-cuda-include=/usr/include >> --with-cuda-lib="-L/usr/lib/x86_64-linux-gnu -lcuda -lcudart -lcublas >> -lcufft -lcusparse -lcusolver" PETSC_ARCH=arch-ci-linux-opt-cxx-cuda-double >> >> Thanks for your help, >> >> Jorge >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.beare at monash.edu Mon Mar 2 22:45:12 2020 From: richard.beare at monash.edu (Richard Beare) Date: Tue, 3 Mar 2020 15:45:12 +1100 Subject: [petsc-users] Correct approach for updating deprecated code In-Reply-To: References: Message-ID: This is the error. Maybe nothing to do with the viewer part and something to do with changes in initialization? Something that has happened since version 3.6.3, perhaps. [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Operation done in wrong order [0]PETSC ERROR: You should call DMSetUp() first [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-753-gbac983c101 GIT Date: 2020-02-18 15:05:54 +0000 [0]PETSC ERROR: simul_atrophy on a named m3j007 by rbeare Tue Mar 3 14:12:22 2020 [0]PETSC ERROR: Configure options --with-cc=gcc-6 --with-cxx=g++-6 --with-fc=gfortran --download-mpich --download-fblaslapack --with-clangu age=cxx --prefix=/opt/petsc/ --with-64-bit-indices=yes [0]PETSC ERROR: #1 DMDASetFieldName() line 68 in /petsc/src/dm/impls/da/dacorn.c [0]PETSC ERROR: #2 PetscAdLemTaras3D() line 68 in /simul-atrophy/src/includes/PetscAdLemTaras3D.hxx terminate called after throwing an instance of 'std::runtime_error' what(): Error detected in C PETSc SIGABRT: abort PC=0x47282b m=0 sigcode=0 goroutine 1 [running, locked to thread]: syscall.RawSyscall(0x3e, 0x5cbd, 0x6, 0x0, 0xc0001e3ef0, 0x48f422, 0x5cbd) /usr/local/go/1.11.1/src/syscall/asm_linux_amd64.s:78 +0x2b fp=0xc0001e3eb8 sp=0xc0001e3eb0 pc=0x47282b syscall.Kill(0x5cbd, 0x6, 0x4377de, 0xc0001e3f20) /usr/local/go/1.11.1/src/syscall/zsyscall_linux_amd64.go:597 +0x4b fp=0xc0001e3f00 sp=0xc0001e3eb8 pc=0x46f1db github.com/sylabs/singularity/internal/app/starter.Master.func4() internal/app/starter/master_linux.go:158 +0x3e fp=0xc0001e3f38 sp=0xc0001e3f00 pc=0x8d51be github.com/sylabs/singularity/internal/pkg/util/mainthread.Execute.func1() internal/pkg/util/mainthread/mainthread.go:20 +0x2f fp=0xc0001e3f60 sp=0xc0001e3f38 pc=0x87472f main.main() cmd/starter/main_linux.go:102 +0x68 fp=0xc0001e3f98 sp=0xc0001e3f60 pc=0x8d59f8 runtime.main() /usr/local/go/1.11.1/src/runtime/proc.go:201 +0x207 fp=0xc0001e3fe0 sp=0xc0001e3f98 pc=0x42faa7 runtime.goexit() /usr/local/go/1.11.1/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc0001e3fe8 sp=0xc0001e3fe0 pc=0x45b4f1 goroutine 5 [syscall]: os/signal.signal_recv(0xaa2620) /usr/local/go/1.11.1/src/runtime/sigqueue.go:139 +0x9c os/signal.loop() /usr/local/go/1.11.1/src/os/signal/signal_unix.go:23 +0x22 created by os/signal.init.0 /usr/local/go/1.11.1/src/os/signal/signal_unix.go:29 +0x41 goroutine 7 [chan receive]: github.com/sylabs/singularity/internal/pkg/util/mainthread.Execute(0xc0003e83a0) internal/pkg/util/mainthread/mainthread.go:23 +0xb4 github.com/sylabs/singularity/internal/app/starter.Master(0x4, 0xa, 0x2300, 0x5cca, 0xc000213e00) internal/app/starter/master_linux.go:157 +0x44e main.startup() cmd/starter/main_linux.go:73 +0x563 created by main.main cmd/starter/main_linux.go:98 +0x3e On Tue, 25 Feb 2020 at 03:04, Matthew Knepley wrote: > On Sun, Feb 23, 2020 at 6:45 PM Richard Beare > wrote: > >> That's what I did (see below), but I got ordering errors (unfortunately >> deleted those logs too soon). I'll rerun if no one recognises what I've >> done wrong. >> >> PetscViewer viewer1; >> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName.c_str >> (),FILE_MODE_WRITE,&viewer1);CHKERRQ(ierr); >> //ierr = >> PetscViewerSetFormat(viewer1,PETSC_VIEWER_BINARY_MATLAB);CHKERRQ(ierr); >> ierr = PetscViewerPushFormat(viewer1,PETSC_VIEWER_BINARY_MATLAB);CHKERRQ >> (ierr); >> > > This should not cause problems. However, is it possible that somewhere you > are pushing a format > again and again without popping? This could exceed the stack size. 
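(For reference, the balanced pattern keeps each push paired with a pop. A minimal sketch reusing the variable names from the snippet above, with error handling via CHKERRQ as elsewhere in the thread:)

PetscViewer viewer1;
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName.c_str(),FILE_MODE_WRITE,&viewer1);CHKERRQ(ierr);
ierr = PetscViewerPushFormat(viewer1,PETSC_VIEWER_BINARY_MATLAB);CHKERRQ(ierr);
ierr = PetscObjectSetName((PetscObject)mX,"x");CHKERRQ(ierr);
ierr = PetscObjectSetName((PetscObject)mB,"b");CHKERRQ(ierr);
ierr = VecView(mX,viewer1);CHKERRQ(ierr);
ierr = VecView(mB,viewer1);CHKERRQ(ierr);
ierr = PetscViewerPopFormat(viewer1);CHKERRQ(ierr); /* matches the push, so repeated writes do not grow the format stack */
ierr = PetscViewerDestroy(&viewer1);CHKERRQ(ierr);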
> > Thanks, > > Matt > > >> ierr = PetscObjectSetName((PetscObject)mX,"x");CHKERRQ(ierr); >> ierr = PetscObjectSetName((PetscObject)mB,"b");CHKERRQ(ierr); >> >> On Mon, 24 Feb 2020 at 10:43, Matthew Knepley wrote: >> >>> On Sun, Feb 23, 2020 at 6:25 PM Richard Beare via petsc-users < >>> petsc-users at mcs.anl.gov> wrote: >>> >>>> >>>> Hi, >>>> The following code gives a deprecation warning. What is the correct way >>>> of updating the use of ViewerSetFormat to ViewerPushFormat (which I presume >>>> is the preferred replacement). My first attempt gave errors concerning >>>> ordering. >>>> >>> >>> You can't just change SetFormat to PushFormat here? >>> >>> Matt >>> >>> >>>> Thanks >>>> >>>> PetscViewer viewer1; >>>> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName.c_str >>>> (),FILE_MODE_WRITE,&viewer1);CHKERRQ(ierr); >>>> ierr = PetscViewerSetFormat(viewer1,PETSC_VIEWER_BINARY_MATLAB);CHKERRQ >>>> (ierr); >>>> >>>> ierr = PetscObjectSetName((PetscObject)mX,"x");CHKERRQ(ierr); >>>> ierr = PetscObjectSetName((PetscObject)mB,"b");CHKERRQ(ierr); >>>> >>>> ierr = VecView(mX,viewer1);CHKERRQ(ierr); >>>> ierr = VecView(mB,viewer1);CHKERRQ(ierr); >>>> >>>> >>>> -- >>>> -- >>>> A/Prof Richard Beare >>>> Imaging and Bioinformatics, Peninsula Clinical School >>>> orcid.org/0000-0002-7530-5664 >>>> Richard.Beare at monash.edu >>>> +61 3 9788 1724 >>>> >>>> >>>> >>>> Geospatial Research: >>>> https://www.monash.edu/medicine/scs/medicine/research/geospatial-analysis >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> >> >> -- >> -- >> A/Prof Richard Beare >> Imaging and Bioinformatics, Peninsula Clinical School >> orcid.org/0000-0002-7530-5664 >> Richard.Beare at monash.edu >> +61 3 9788 1724 >> >> >> >> Geospatial Research: >> https://www.monash.edu/medicine/scs/medicine/research/geospatial-analysis >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- -- A/Prof Richard Beare Imaging and Bioinformatics, Peninsula Clinical School orcid.org/0000-0002-7530-5664 Richard.Beare at monash.edu +61 3 9788 1724 Geospatial Research: https://www.monash.edu/medicine/scs/medicine/research/geospatial-analysis -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Tue Mar 3 02:46:00 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Tue, 3 Mar 2020 09:46:00 +0100 Subject: [petsc-users] Correct approach for updating deprecated code In-Reply-To: References: Message-ID: There was a relevant change in PETSc 3.8, you now need to call DMSetUp() after DMDACreate1d(), DMDACreate2d(), or DMDACreate3d(). https://www.mcs.anl.gov/petsc/documentation/changes/38.html "Replace calls to DMDACreateXd() with DMDACreateXd(), [DMSetFromOptions()] DMSetUp()" Am Di., 3. M?rz 2020 um 05:46 Uhr schrieb Richard Beare via petsc-users < petsc-users at mcs.anl.gov>: > This is the error. Maybe nothing to do with the viewer part and something > to do with changes in initialization? Something that has happened since > version 3.6.3, perhaps. 
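(Concretely, the post-3.8 creation sequence that avoids the "You should call DMSetUp() first" error quoted below looks like this sketch; the grid sizes, dof, and field name are placeholders, not taken from simul-atrophy:)

DM da;
ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                    DMDA_STENCIL_STAR,mx,my,mz,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                    4,1,NULL,NULL,NULL,&da);CHKERRQ(ierr);
ierr = DMSetFromOptions(da);CHKERRQ(ierr); /* optional, per the 3.8 changes note */
ierr = DMSetUp(da);CHKERRQ(ierr);          /* must come before the DMDA is named or queried */
ierr = DMDASetFieldName(da,0,"vx");CHKERRQ(ierr); /* now legal; without DMSetUp() this is the call that errors */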
> > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Operation done in wrong order > [0]PETSC ERROR: You should call DMSetUp() first > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-753-gbac983c101 > GIT Date: 2020-02-18 15:05:54 +0000 > [0]PETSC ERROR: simul_atrophy on a named m3j007 by rbeare Tue Mar 3 > 14:12:22 2020 > [0]PETSC ERROR: Configure options --with-cc=gcc-6 --with-cxx=g++-6 > --with-fc=gfortran --download-mpich --download-fblaslapack --with-clangu > age=cxx --prefix=/opt/petsc/ --with-64-bit-indices=yes > [0]PETSC ERROR: #1 DMDASetFieldName() line 68 in > /petsc/src/dm/impls/da/dacorn.c > [0]PETSC ERROR: #2 PetscAdLemTaras3D() line 68 in > /simul-atrophy/src/includes/PetscAdLemTaras3D.hxx > terminate called after throwing an instance of 'std::runtime_error' > what(): Error detected in C PETSc > SIGABRT: abort > PC=0x47282b m=0 sigcode=0 > > goroutine 1 [running, locked to thread]: > syscall.RawSyscall(0x3e, 0x5cbd, 0x6, 0x0, 0xc0001e3ef0, 0x48f422, 0x5cbd) > /usr/local/go/1.11.1/src/syscall/asm_linux_amd64.s:78 +0x2b > fp=0xc0001e3eb8 sp=0xc0001e3eb0 pc=0x47282b > syscall.Kill(0x5cbd, 0x6, 0x4377de, 0xc0001e3f20) > /usr/local/go/1.11.1/src/syscall/zsyscall_linux_amd64.go:597 +0x4b > fp=0xc0001e3f00 sp=0xc0001e3eb8 pc=0x46f1db > github.com/sylabs/singularity/internal/app/starter.Master.func4() > internal/app/starter/master_linux.go:158 +0x3e fp=0xc0001e3f38 > sp=0xc0001e3f00 pc=0x8d51be > github.com/sylabs/singularity/internal/pkg/util/mainthread.Execute.func1() > internal/pkg/util/mainthread/mainthread.go:20 +0x2f > fp=0xc0001e3f60 sp=0xc0001e3f38 pc=0x87472f > main.main() > cmd/starter/main_linux.go:102 +0x68 fp=0xc0001e3f98 > sp=0xc0001e3f60 pc=0x8d59f8 > runtime.main() > /usr/local/go/1.11.1/src/runtime/proc.go:201 +0x207 > fp=0xc0001e3fe0 sp=0xc0001e3f98 pc=0x42faa7 > runtime.goexit() > /usr/local/go/1.11.1/src/runtime/asm_amd64.s:1333 +0x1 > fp=0xc0001e3fe8 sp=0xc0001e3fe0 pc=0x45b4f1 > > goroutine 5 [syscall]: > os/signal.signal_recv(0xaa2620) > /usr/local/go/1.11.1/src/runtime/sigqueue.go:139 +0x9c > os/signal.loop() > /usr/local/go/1.11.1/src/os/signal/signal_unix.go:23 +0x22 > created by os/signal.init.0 > /usr/local/go/1.11.1/src/os/signal/signal_unix.go:29 +0x41 > > goroutine 7 [chan receive]: > > github.com/sylabs/singularity/internal/pkg/util/mainthread.Execute(0xc0003e83a0) > internal/pkg/util/mainthread/mainthread.go:23 +0xb4 > github.com/sylabs/singularity/internal/app/starter.Master(0x4, 0xa, > 0x2300, 0x5cca, 0xc000213e00) > internal/app/starter/master_linux.go:157 +0x44e > main.startup() > cmd/starter/main_linux.go:73 +0x563 > created by main.main > cmd/starter/main_linux.go:98 +0x3e > > On Tue, 25 Feb 2020 at 03:04, Matthew Knepley wrote: > >> On Sun, Feb 23, 2020 at 6:45 PM Richard Beare >> wrote: >> >>> That's what I did (see below), but I got ordering errors (unfortunately >>> deleted those logs too soon). I'll rerun if no one recognises what I've >>> done wrong. >>> >>> PetscViewer viewer1; >>> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName.c_str >>> (),FILE_MODE_WRITE,&viewer1);CHKERRQ(ierr); >>> //ierr = >>> PetscViewerSetFormat(viewer1,PETSC_VIEWER_BINARY_MATLAB);CHKERRQ(ierr); >>> ierr = PetscViewerPushFormat(viewer1,PETSC_VIEWER_BINARY_MATLAB);CHKERRQ >>> (ierr); >>> >> >> This should not cause problems. 
However, is it possible that somewhere >> you are pushing a format >> again and again without popping? This could exceed the stack size. >> >> Thanks, >> >> Matt >> >> >>> ierr = PetscObjectSetName((PetscObject)mX,"x");CHKERRQ(ierr); >>> ierr = PetscObjectSetName((PetscObject)mB,"b");CHKERRQ(ierr); >>> >>> On Mon, 24 Feb 2020 at 10:43, Matthew Knepley wrote: >>> >>>> On Sun, Feb 23, 2020 at 6:25 PM Richard Beare via petsc-users < >>>> petsc-users at mcs.anl.gov> wrote: >>>> >>>>> >>>>> Hi, >>>>> The following code gives a deprecation warning. What is the correct >>>>> way of updating the use of ViewerSetFormat to ViewerPushFormat (which I >>>>> presume is the preferred replacement). My first attempt gave errors >>>>> concerning ordering. >>>>> >>>> >>>> You can't just change SetFormat to PushFormat here? >>>> >>>> Matt >>>> >>>> >>>>> Thanks >>>>> >>>>> PetscViewer viewer1; >>>>> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName.c_str >>>>> (),FILE_MODE_WRITE,&viewer1);CHKERRQ(ierr); >>>>> ierr = PetscViewerSetFormat(viewer1,PETSC_VIEWER_BINARY_MATLAB); >>>>> CHKERRQ(ierr); >>>>> >>>>> ierr = PetscObjectSetName((PetscObject)mX,"x");CHKERRQ(ierr); >>>>> ierr = PetscObjectSetName((PetscObject)mB,"b");CHKERRQ(ierr); >>>>> >>>>> ierr = VecView(mX,viewer1);CHKERRQ(ierr); >>>>> ierr = VecView(mB,viewer1);CHKERRQ(ierr); >>>>> >>>>> >>>>> -- >>>>> -- >>>>> A/Prof Richard Beare >>>>> Imaging and Bioinformatics, Peninsula Clinical School >>>>> orcid.org/0000-0002-7530-5664 >>>>> Richard.Beare at monash.edu >>>>> +61 3 9788 1724 >>>>> >>>>> >>>>> >>>>> Geospatial Research: >>>>> https://www.monash.edu/medicine/scs/medicine/research/geospatial-analysis >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >>> >>> -- >>> -- >>> A/Prof Richard Beare >>> Imaging and Bioinformatics, Peninsula Clinical School >>> orcid.org/0000-0002-7530-5664 >>> Richard.Beare at monash.edu >>> +61 3 9788 1724 >>> >>> >>> >>> Geospatial Research: >>> https://www.monash.edu/medicine/scs/medicine/research/geospatial-analysis >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > > -- > -- > A/Prof Richard Beare > Imaging and Bioinformatics, Peninsula Clinical School > orcid.org/0000-0002-7530-5664 > Richard.Beare at monash.edu > +61 3 9788 1724 > > > > Geospatial Research: > https://www.monash.edu/medicine/scs/medicine/research/geospatial-analysis > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jordic at cttc.upc.edu Tue Mar 3 04:50:11 2020 From: jordic at cttc.upc.edu (jordic) Date: Tue, 03 Mar 2020 11:50:11 +0100 Subject: [petsc-users] Memory leak at GPU when updating matrix of type mpiaijcusparse (CUDA) In-Reply-To: References: Message-ID: <2c5ec28019955403c431b8d557864039@cttc.upc.edu> Hello Junchao, Thank you very much for solving these problems so quickly! Best regards, Jorge On 2020-03-03 00:35, Junchao Zhang wrote: > Jorge, > I found multiple problems in petsc with your example. I have a fix at https://gitlab.com/petsc/petsc/-/merge_requests/2575 > If everything goes well, it will be in maint and master in a few days. 
> Thanks for reporting the problem. > > --Junchao Zhang > > On Fri, Feb 28, 2020 at 12:13 PM Junchao Zhang wrote: > I will take a look at it and get back to you. Thanks. > > On Fri, Feb 28, 2020, 7:29 AM jordic wrote: > > Dear all, > > the following simple program: > > ////////////////////////////////////////////////////////////////////////////////////// > > #include > > PetscInt ierr=0; > int main(int argc,char **argv) > { > MPI_Comm comm; > PetscMPIInt rank,size; > > PetscInitialize(&argc,&argv,NULL,help);if (ierr) return ierr; > comm = PETSC_COMM_WORLD; > MPI_Comm_rank(comm,&rank); > MPI_Comm_size(comm,&size); > > Mat A; > MatCreate(comm, &A); > MatSetSizes(A, 1, 1, PETSC_DETERMINE, PETSC_DETERMINE); > MatSetFromOptions(A); > PetscInt dnz=1, onz=0; > MatMPIAIJSetPreallocation(A, 0, &dnz, 0, &onz); > MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE); > MatSetOption(A, MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE); > PetscInt igid=rank, jgid=rank; > PetscScalar value=rank+1.0; > > // for(int i=0; i<10; ++i) > for(;;) //infinite loop > { > MatSetValue(A, igid, jgid, value, INSERT_VALUES); > MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); > } > MatDestroy(&A); > PetscFinalize(); > return ierr; > } > > ////////////////////////////////////////////////////////////////////////////////////// > > creates a simple diagonal matrix with one value per mpi-core. If the type of the matrix is "mpiaij" (-mat_type mpiaij) there is no problem but with "mpiaijcusparse" (-mat_type mpiaijcusparse) the memory usage at the GPU grows with every iteration of the infinite loop. The only solution that I found is to destroy and create the matrix every time that it needs to be updated. Is there a better way to avoid this problem? > > I am using Petsc Release Version 3.12.2 with this configure options: > > Configure options --package-prefix-hash=/home_nobck/user/petsc-hash-pkgs --with-debugging=0 --with-fc=0 CC=gcc CXX=g++ --COPTFLAGS="-g -O3" --CXXOPTFLAGS="-g -O3" --CUDAOPTFLAGS="-D_FORCE_INLINES -g -O3" --with-mpi-include=/usr/lib/openmpi/include --with-mpi-lib="-L/usr/lib/openmpi/lib -lmpi_cxx -lmpi" --with-cuda=1 --with-precision=double --with-cuda-include=/usr/include --with-cuda-lib="-L/usr/lib/x86_64-linux-gnu -lcuda -lcudart -lcublas -lcufft -lcusparse -lcusolver" PETSC_ARCH=arch-ci-linux-opt-cxx-cuda-double > > Thanks for your help, > > Jorge -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliu29 at ncsu.edu Tue Mar 3 22:00:43 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Tue, 3 Mar 2020 20:00:43 -0800 Subject: [petsc-users] Inquiry about the preconditioner setup of KSP. Message-ID: I am trying to use Julia to call Petsc. 1) First, I run the built-in example ex2.c https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex2.c.html For this case, I tried KSPGMRES , initial zero solution and different PC (LU, ILU, ICC, JACOBI). And they work as expected. 75: KSPSetType(ksp,KSPGMRES); 76: KSPSetInitialGuessNonzero(ksp,PETSC_FALSE); 87: KSPGetPC(ksp,&pc); 88: PCSetType(pc,PCICC); 2) Second, I tried to call KSP from Julia using the same matrix and right hand side as ex2.c . A wrapper has been written to call Petsc code from Julia. After I transfer matrix to from Julia to Petsc, I checked the matrix and preconditioner matrix in the context of Petsc. These two matrices are right. For no preconditioner,Jacobi precontioner and LU, the residual for Julia version is same as that of original Petsc one for every iteration. 
However, for ILU preconditioner, the residual for Julia is alwaysthe same as the LU one. This is not expected. For both Julia and original Petsc version, I checked the PC type inside the subroutine PCSetType and the types are correct, namely ilu.I am trying to dig into the source code to check how the preconditioner is interacting with GMRES. Where is the subroutine for the GMRES solver, namely, ierr=(*KSP->ops->solve)(ksp); Or do you have any suggestions? Thanks, Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Mar 3 22:21:11 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 03 Mar 2020 21:21:11 -0700 Subject: [petsc-users] Inquiry about the preconditioner setup of KSP. In-Reply-To: References: Message-ID: <87k1404uo8.fsf@jedbrown.org> Is it possible that Julia is transferring the matrix to PETSc as a dense matrix (storing the nonzeros) instead of preserving sparsity? If you store the zeros, then ILU will be allowed to fill those entries, thereby becoming LU. Xiaodong Liu writes: > I am trying to use Julia to call Petsc. > 1) First, I run the built-in example ex2.c > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex2.c.html > > For this case, I tried KSPGMRES , initial zero solution and different PC > (LU, ILU, ICC, JACOBI). And they work as expected. > > 75: KSPSetType(ksp,KSPGMRES); > 76: KSPSetInitialGuessNonzero(ksp,PETSC_FALSE); > 87: KSPGetPC(ksp,&pc); > 88: PCSetType(pc,PCICC); > > 2) Second, I tried to call KSP from Julia using the same matrix and right > hand side as ex2.c . A wrapper has been written to call Petsc code from > Julia. After I transfer matrix to from Julia to Petsc, I checked the matrix > and preconditioner matrix in the context of Petsc. These two matrices are > right. For no preconditioner,Jacobi precontioner and LU, the residual for > Julia version is same as that of original Petsc one for every iteration. > However, for ILU preconditioner, the residual for Julia is alwaysthe same > as the LU one. This is not expected. For both Julia and original Petsc > version, I checked the PC type inside the subroutine PCSetType and the > types are correct, namely ilu.I am trying to dig into the source code to > check how the preconditioner is interacting with GMRES. > > Where is the subroutine for the GMRES solver, namely, > ierr=(*KSP->ops->solve)(ksp); > > Or do you have any suggestions? > > Thanks, > > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 From xliu29 at ncsu.edu Tue Mar 3 22:26:56 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Tue, 3 Mar 2020 20:26:56 -0800 Subject: [petsc-users] Inquiry about the preconditioner setup of KSP. In-Reply-To: <87k1404uo8.fsf@jedbrown.org> References: <87k1404uo8.fsf@jedbrown.org> Message-ID: Thanks a lot. I am transferring a dense matrix from Julia to Petsc. I will check this. In addition, could you please show me where is Where is the subroutine for the GMRES solver, namely, ierr=(*KSP->ops->solve)(ksp); Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. 
Box 1663, Los Alamos, NM 87544 505-709-0534 On Tue, Mar 3, 2020 at 8:21 PM Jed Brown wrote: > Is it possible that Julia is transferring the matrix to PETSc as a dense > matrix (storing the nonzeros) instead of preserving sparsity? If you > store the zeros, then ILU will be allowed to fill those entries, thereby > becoming LU. > > Xiaodong Liu writes: > > > I am trying to use Julia to call Petsc. > > 1) First, I run the built-in example ex2.c > > > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex2.c.html > > > > For this case, I tried KSPGMRES , initial zero solution and different PC > > (LU, ILU, ICC, JACOBI). And they work as expected. > > > > 75: KSPSetType(ksp,KSPGMRES); > > 76: KSPSetInitialGuessNonzero(ksp,PETSC_FALSE); > > 87: KSPGetPC(ksp,&pc); > > 88: PCSetType(pc,PCICC); > > > > 2) Second, I tried to call KSP from Julia using the same matrix and right > > hand side as ex2.c . A wrapper has been written to call Petsc code from > > Julia. After I transfer matrix to from Julia to Petsc, I checked the > matrix > > and preconditioner matrix in the context of Petsc. These two matrices are > > right. For no preconditioner,Jacobi precontioner and LU, the residual for > > Julia version is same as that of original Petsc one for every iteration. > > However, for ILU preconditioner, the residual for Julia is alwaysthe same > > as the LU one. This is not expected. For both Julia and original Petsc > > version, I checked the PC type inside the subroutine PCSetType and the > > types are correct, namely ilu.I am trying to dig into the source code to > > check how the preconditioner is interacting with GMRES. > > > > Where is the subroutine for the GMRES solver, namely, > > ierr=(*KSP->ops->solve)(ksp); > > > > Or do you have any suggestions? > > > > Thanks, > > > > Xiaodong Liu, PhD > > X: Computational Physics Division > > Los Alamos National Laboratory > > P.O. Box 1663, > > Los Alamos, NM 87544 > > 505-709-0534 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Mar 3 22:46:48 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 03 Mar 2020 21:46:48 -0700 Subject: [petsc-users] Inquiry about the preconditioner setup of KSP. In-Reply-To: References: <87k1404uo8.fsf@jedbrown.org> Message-ID: <87h7z44thj.fsf@jedbrown.org> Xiaodong Liu writes: > Thanks a lot. I am transferring a dense matrix from Julia to Petsc. > I will check this. > In addition, could you please show me where is > > Where is the subroutine for the GMRES solver, namely, > ierr=(*KSP->ops->solve)(ksp); $ git grep KSPSolve_GMRES src/docs/tex/manual/developers.tex:\item Names of implementations of class functions should begin with the function name, an underscore, and the name of the implementation, for example, \lstinline{KSPSolve_GMRES()}. src/ksp/ksp/impls/gmres/gmres.c:PetscErrorCode KSPSolve_GMRES(KSP ksp) src/ksp/ksp/impls/gmres/gmres.c: ksp->ops->solve = KSPSolve_GMRES; > > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > > > On Tue, Mar 3, 2020 at 8:21 PM Jed Brown wrote: > >> Is it possible that Julia is transferring the matrix to PETSc as a dense >> matrix (storing the nonzeros) instead of preserving sparsity? If you >> store the zeros, then ILU will be allowed to fill those entries, thereby >> becoming LU. >> >> Xiaodong Liu writes: >> >> > I am trying to use Julia to call Petsc. 
>> > 1) First, I run the built-in example ex2.c >> > >> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex2.c.html >> > >> > For this case, I tried KSPGMRES , initial zero solution and different PC >> > (LU, ILU, ICC, JACOBI). And they work as expected. >> > >> > 75: KSPSetType(ksp,KSPGMRES); >> > 76: KSPSetInitialGuessNonzero(ksp,PETSC_FALSE); >> > 87: KSPGetPC(ksp,&pc); >> > 88: PCSetType(pc,PCICC); >> > >> > 2) Second, I tried to call KSP from Julia using the same matrix and right >> > hand side as ex2.c . A wrapper has been written to call Petsc code from >> > Julia. After I transfer matrix to from Julia to Petsc, I checked the >> matrix >> > and preconditioner matrix in the context of Petsc. These two matrices are >> > right. For no preconditioner,Jacobi precontioner and LU, the residual for >> > Julia version is same as that of original Petsc one for every iteration. >> > However, for ILU preconditioner, the residual for Julia is alwaysthe same >> > as the LU one. This is not expected. For both Julia and original Petsc >> > version, I checked the PC type inside the subroutine PCSetType and the >> > types are correct, namely ilu.I am trying to dig into the source code to >> > check how the preconditioner is interacting with GMRES. >> > >> > Where is the subroutine for the GMRES solver, namely, >> > ierr=(*KSP->ops->solve)(ksp); >> > >> > Or do you have any suggestions? >> > >> > Thanks, >> > >> > Xiaodong Liu, PhD >> > X: Computational Physics Division >> > Los Alamos National Laboratory >> > P.O. Box 1663, >> > Los Alamos, NM 87544 >> > 505-709-0534 >> From xliu29 at ncsu.edu Tue Mar 3 23:11:38 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Tue, 3 Mar 2020 21:11:38 -0800 Subject: [petsc-users] Inquiry about the preconditioner setup of KSP. In-Reply-To: <87h7z44thj.fsf@jedbrown.org> References: <87k1404uo8.fsf@jedbrown.org> <87h7z44thj.fsf@jedbrown.org> Message-ID: Thanks, Jed ! Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Tue, Mar 3, 2020 at 8:46 PM Jed Brown wrote: > Xiaodong Liu writes: > > > Thanks a lot. I am transferring a dense matrix from Julia to Petsc. > > I will check this. > > In addition, could you please show me where is > > > > Where is the subroutine for the GMRES solver, namely, > > ierr=(*KSP->ops->solve)(ksp); > > $ git grep KSPSolve_GMRES > src/docs/tex/manual/developers.tex:\item Names of implementations of class > functions should begin with the function name, an underscore, and the name > of the implementation, for example, \lstinline{KSPSolve_GMRES()}. > src/ksp/ksp/impls/gmres/gmres.c:PetscErrorCode KSPSolve_GMRES(KSP ksp) > src/ksp/ksp/impls/gmres/gmres.c: ksp->ops->solve = > KSPSolve_GMRES; > > > > > > Xiaodong Liu, PhD > > X: Computational Physics Division > > Los Alamos National Laboratory > > P.O. Box 1663, > > Los Alamos, NM 87544 > > 505-709-0534 > > > > > > On Tue, Mar 3, 2020 at 8:21 PM Jed Brown wrote: > > > >> Is it possible that Julia is transferring the matrix to PETSc as a dense > >> matrix (storing the nonzeros) instead of preserving sparsity? If you > >> store the zeros, then ILU will be allowed to fill those entries, thereby > >> becoming LU. > >> > >> Xiaodong Liu writes: > >> > >> > I am trying to use Julia to call Petsc. 
> >> > 1) First, I run the built-in example ex2.c > >> > > >> > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex2.c.html > >> > > >> > For this case, I tried KSPGMRES , initial zero solution and > different PC > >> > (LU, ILU, ICC, JACOBI). And they work as expected. > >> > > >> > 75: KSPSetType(ksp,KSPGMRES); > >> > 76: KSPSetInitialGuessNonzero(ksp,PETSC_FALSE); > >> > 87: KSPGetPC(ksp,&pc); > >> > 88: PCSetType(pc,PCICC); > >> > > >> > 2) Second, I tried to call KSP from Julia using the same matrix and > right > >> > hand side as ex2.c . A wrapper has been written to call Petsc code > from > >> > Julia. After I transfer matrix to from Julia to Petsc, I checked the > >> matrix > >> > and preconditioner matrix in the context of Petsc. These two matrices > are > >> > right. For no preconditioner,Jacobi precontioner and LU, the residual > for > >> > Julia version is same as that of original Petsc one for every > iteration. > >> > However, for ILU preconditioner, the residual for Julia is alwaysthe > same > >> > as the LU one. This is not expected. For both Julia and original Petsc > >> > version, I checked the PC type inside the subroutine PCSetType and the > >> > types are correct, namely ilu.I am trying to dig into the source code > to > >> > check how the preconditioner is interacting with GMRES. > >> > > >> > Where is the subroutine for the GMRES solver, namely, > >> > ierr=(*KSP->ops->solve)(ksp); > >> > > >> > Or do you have any suggestions? > >> > > >> > Thanks, > >> > > >> > Xiaodong Liu, PhD > >> > X: Computational Physics Division > >> > Los Alamos National Laboratory > >> > P.O. Box 1663, > >> > Los Alamos, NM 87544 > >> > 505-709-0534 > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 4 03:53:07 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 4 Mar 2020 04:53:07 -0500 Subject: [petsc-users] Inquiry about the preconditioner setup of KSP. In-Reply-To: <87h7z44thj.fsf@jedbrown.org> References: <87k1404uo8.fsf@jedbrown.org> <87h7z44thj.fsf@jedbrown.org> Message-ID: On Tue, Mar 3, 2020 at 11:46 PM Jed Brown wrote: > Xiaodong Liu writes: > > > Thanks a lot. I am transferring a dense matrix from Julia to Petsc. > > I will check this. > > In addition, could you please show me where is > > > > Where is the subroutine for the GMRES solver, namely, > > ierr=(*KSP->ops->solve)(ksp); > > $ git grep KSPSolve_GMRES > src/docs/tex/manual/developers.tex:\item Names of implementations of class > functions should begin with the function name, an underscore, and the name > of the implementation, for example, \lstinline{KSPSolve_GMRES()}. > src/ksp/ksp/impls/gmres/gmres.c:PetscErrorCode KSPSolve_GMRES(KSP ksp) > src/ksp/ksp/impls/gmres/gmres.c: ksp->ops->solve = > KSPSolve_GMRES; > > Note that these implementation links are also at the bottom of the interface manpages: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSolve.html Thanks, Matt > > > > Xiaodong Liu, PhD > > X: Computational Physics Division > > Los Alamos National Laboratory > > P.O. Box 1663, > > Los Alamos, NM 87544 > > 505-709-0534 > > > > > > On Tue, Mar 3, 2020 at 8:21 PM Jed Brown wrote: > > > >> Is it possible that Julia is transferring the matrix to PETSc as a dense > >> matrix (storing the nonzeros) instead of preserving sparsity? If you > >> store the zeros, then ILU will be allowed to fill those entries, thereby > >> becoming LU. 
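(To illustrate that point, a sketch only and not tied to the Julia wrapper in question: if the matrix is assembled as a sparse AIJ matrix and only the true nonzeros are inserted, ILU(0) is confined to that sparsity pattern and genuinely differs from LU. Here n, ncols, cols, and vals stand for whatever the application provides.)

Mat      A;
PetscInt i, rstart, rend, ncols;
ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);
ierr = MatSetUp(A);CHKERRQ(ierr);
ierr = MatSetOption(A,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE);CHKERRQ(ierr); /* drop explicit zeros if the source is dense */
ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
for (i=rstart; i<rend; i++) {
  /* cols/vals hold only the structurally nonzero entries of row i */
  ierr = MatSetValues(A,1,&i,ncols,cols,vals,INSERT_VALUES);CHKERRQ(ierr);
}
ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);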
> >> > >> Xiaodong Liu writes: > >> > >> > I am trying to use Julia to call Petsc. > >> > 1) First, I run the built-in example ex2.c > >> > > >> > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex2.c.html > >> > > >> > For this case, I tried KSPGMRES , initial zero solution and > different PC > >> > (LU, ILU, ICC, JACOBI). And they work as expected. > >> > > >> > 75: KSPSetType(ksp,KSPGMRES); > >> > 76: KSPSetInitialGuessNonzero(ksp,PETSC_FALSE); > >> > 87: KSPGetPC(ksp,&pc); > >> > 88: PCSetType(pc,PCICC); > >> > > >> > 2) Second, I tried to call KSP from Julia using the same matrix and > right > >> > hand side as ex2.c . A wrapper has been written to call Petsc code > from > >> > Julia. After I transfer matrix to from Julia to Petsc, I checked the > >> matrix > >> > and preconditioner matrix in the context of Petsc. These two matrices > are > >> > right. For no preconditioner,Jacobi precontioner and LU, the residual > for > >> > Julia version is same as that of original Petsc one for every > iteration. > >> > However, for ILU preconditioner, the residual for Julia is alwaysthe > same > >> > as the LU one. This is not expected. For both Julia and original Petsc > >> > version, I checked the PC type inside the subroutine PCSetType and the > >> > types are correct, namely ilu.I am trying to dig into the source code > to > >> > check how the preconditioner is interacting with GMRES. > >> > > >> > Where is the subroutine for the GMRES solver, namely, > >> > ierr=(*KSP->ops->solve)(ksp); > >> > > >> > Or do you have any suggestions? > >> > > >> > Thanks, > >> > > >> > Xiaodong Liu, PhD > >> > X: Computational Physics Division > >> > Los Alamos National Laboratory > >> > P.O. Box 1663, > >> > Los Alamos, NM 87544 > >> > 505-709-0534 > >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliu29 at ncsu.edu Wed Mar 4 10:42:45 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Wed, 4 Mar 2020 09:42:45 -0700 Subject: [petsc-users] Inquiry about the preconditioner setup of KSP. In-Reply-To: References: <87k1404uo8.fsf@jedbrown.org> <87h7z44thj.fsf@jedbrown.org> Message-ID: Thanks a lot, Matthew. Very helpful. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Wed, Mar 4, 2020 at 2:53 AM Matthew Knepley wrote: > On Tue, Mar 3, 2020 at 11:46 PM Jed Brown wrote: > >> Xiaodong Liu writes: >> >> > Thanks a lot. I am transferring a dense matrix from Julia to Petsc. >> > I will check this. >> > In addition, could you please show me where is >> > >> > Where is the subroutine for the GMRES solver, namely, >> > ierr=(*KSP->ops->solve)(ksp); >> >> $ git grep KSPSolve_GMRES >> src/docs/tex/manual/developers.tex:\item Names of implementations of >> class functions should begin with the function name, an underscore, and the >> name of the implementation, for example, \lstinline{KSPSolve_GMRES()}. 
>> src/ksp/ksp/impls/gmres/gmres.c:PetscErrorCode KSPSolve_GMRES(KSP ksp) >> src/ksp/ksp/impls/gmres/gmres.c: ksp->ops->solve >> = KSPSolve_GMRES; >> >> > Note that these implementation links are also at the bottom of the > interface manpages: > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSolve.html > > Thanks, > > Matt > > >> > >> > Xiaodong Liu, PhD >> > X: Computational Physics Division >> > Los Alamos National Laboratory >> > P.O. Box 1663, >> > Los Alamos, NM 87544 >> > 505-709-0534 >> > >> > >> > On Tue, Mar 3, 2020 at 8:21 PM Jed Brown wrote: >> > >> >> Is it possible that Julia is transferring the matrix to PETSc as a >> dense >> >> matrix (storing the nonzeros) instead of preserving sparsity? If you >> >> store the zeros, then ILU will be allowed to fill those entries, >> thereby >> >> becoming LU. >> >> >> >> Xiaodong Liu writes: >> >> >> >> > I am trying to use Julia to call Petsc. >> >> > 1) First, I run the built-in example ex2.c >> >> > >> >> >> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex2.c.html >> >> > >> >> > For this case, I tried KSPGMRES , initial zero solution and >> different PC >> >> > (LU, ILU, ICC, JACOBI). And they work as expected. >> >> > >> >> > 75: KSPSetType(ksp,KSPGMRES); >> >> > 76: KSPSetInitialGuessNonzero(ksp,PETSC_FALSE); >> >> > 87: KSPGetPC(ksp,&pc); >> >> > 88: PCSetType(pc,PCICC); >> >> > >> >> > 2) Second, I tried to call KSP from Julia using the same matrix and >> right >> >> > hand side as ex2.c . A wrapper has been written to call Petsc code >> from >> >> > Julia. After I transfer matrix to from Julia to Petsc, I checked the >> >> matrix >> >> > and preconditioner matrix in the context of Petsc. These two >> matrices are >> >> > right. For no preconditioner,Jacobi precontioner and LU, the >> residual for >> >> > Julia version is same as that of original Petsc one for every >> iteration. >> >> > However, for ILU preconditioner, the residual for Julia is alwaysthe >> same >> >> > as the LU one. This is not expected. For both Julia and original >> Petsc >> >> > version, I checked the PC type inside the subroutine PCSetType and >> the >> >> > types are correct, namely ilu.I am trying to dig into the source >> code to >> >> > check how the preconditioner is interacting with GMRES. >> >> > >> >> > Where is the subroutine for the GMRES solver, namely, >> >> > ierr=(*KSP->ops->solve)(ksp); >> >> > >> >> > Or do you have any suggestions? >> >> > >> >> > Thanks, >> >> > >> >> > Xiaodong Liu, PhD >> >> > X: Computational Physics Division >> >> > Los Alamos National Laboratory >> >> > P.O. Box 1663, >> >> > Los Alamos, NM 87544 >> >> > 505-709-0534 >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Mar 4 10:48:02 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 4 Mar 2020 11:48:02 -0500 Subject: [petsc-users] -ts_adapt_clip .5,1.25 Message-ID: I use -ts_adapt_clip .5,1.25, with -ts_type arkimex -ts_arkimex_type 1bee. When SNES hits max_it (in the first stage), TS seems to reduce the time step by 4x. I don't see where that 4x comes from and how to adjust it. Is it in the "safety" stuff? Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at jedbrown.org Wed Mar 4 15:03:29 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 04 Mar 2020 14:03:29 -0700 Subject: [petsc-users] -ts_adapt_clip .5,1.25 In-Reply-To: References: Message-ID: <87mu8v3k9q.fsf@jedbrown.org> See -ts_adapt_scale_solve_failed. The clip provides bounds on step shortening as a result of the error indicator, but a solve failing can be more severe. The default clip allows more aggressive shortening, but I don't think we want clip to be a lower bound on solver-failure shortening. And if you shorten and still fail, you want to be able to shorten more. Mark Adams writes: > I use -ts_adapt_clip .5,1.25, with -ts_type arkimex -ts_arkimex_type 1bee. > When SNES hits max_it (in the first stage), TS seems to reduce the time > step by 4x. I don't see where that 4x comes from and how to adjust it. Is > it in the "safety" stuff? > Thanks, > Mark From mfadams at lbl.gov Wed Mar 4 19:00:59 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 4 Mar 2020 20:00:59 -0500 Subject: [petsc-users] -ts_adapt_clip .5,1.25 In-Reply-To: <87mu8v3k9q.fsf@jedbrown.org> References: <87mu8v3k9q.fsf@jedbrown.org> Message-ID: On Wed, Mar 4, 2020 at 4:03 PM Jed Brown wrote: > See -ts_adapt_scale_solve_failed. ' That's it. No C interface for this so no doc. > The clip provides bounds on step > shortening as a result of the error indicator, but a solve failing can > be more severe. The default clip allows more aggressive shortening, but > I don't think we want clip to be a lower bound on solver-failure > shortening. And if you shorten and still fail, you want to be able to > shorten more. > Not exactly sure what you are getting at here. I don't have a problem with the default, I just couldn't find where to change it. My problem gets smoother and is pretty simple and well behaved, so I don't need such an aggressive response, I think. I don't have a problem with the default, it's just that I'm trying to squeeze out some performance and I don't want it to fall back so much. Thanks, Mark that the "clip" increase increment that I set (eg, 1.25), when aoos > > Mark Adams writes: > > > I use -ts_adapt_clip .5,1.25, with -ts_type arkimex -ts_arkimex_type > 1bee. > > When SNES hits max_it (in the first stage), TS seems to reduce the time > > step by 4x. I don't see where that 4x comes from and how to adjust it. Is > > it in the "safety" stuff? > > Thanks, > > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Mar 4 19:14:10 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 04 Mar 2020 18:14:10 -0700 Subject: [petsc-users] -ts_adapt_clip .5,1.25 In-Reply-To: References: <87mu8v3k9q.fsf@jedbrown.org> Message-ID: <87wo7z1u3h.fsf@jedbrown.org> Mark Adams writes: > On Wed, Mar 4, 2020 at 4:03 PM Jed Brown wrote: > >> See -ts_adapt_scale_solve_failed. ' > > > That's it. No C interface for this so no doc. Sorry, too much boilerplate makes us sloppy. It'd be great if you could add an interface. From mfadams at lbl.gov Thu Mar 5 06:16:19 2020 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 5 Mar 2020 07:16:19 -0500 Subject: [petsc-users] OS X Catalina Message-ID: I updated to OS X Catalina and I get this error. I opened Xcode and in updated some stuff and I reinstalled mpich with homebrew. Any ideas? Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.log Type: application/octet-stream Size: 106365 bytes Desc: not available URL: From flw at rzg.mpg.de Thu Mar 5 06:22:21 2020 From: flw at rzg.mpg.de (flw at rzg.mpg.de) Date: Thu, 05 Mar 2020 13:22:21 +0100 Subject: [petsc-users] Product of nonsymmetric matrices is symmetric Message-ID: <20200305132221.Horde.DXKRGaTTeUrMMcQYJwAgwgI@webmail.mpcdf.mpg.de> Dear PETSc team, I have a linear system of the form A*x=b, where A=(D-G^T D G). Here, G and D are real NxN matrices, D is diagonal and G is not symmetric. So far, we are using the matmpiaij format for all of the given matrices and create A with the help of matmatmult. However, as is easy to show, the matrix A itself is symmetric, due to the sandwich G^T D G. Therefore, we would like to make use of this fact and use the mpisbaij format fir A instead of mpiaij. Can you tell me how to set up the matrix A in this fashion? Unfortunately, I haven't found anything on that in the archive yet. Best regards, Felix From mfadams at lbl.gov Thu Mar 5 07:32:46 2020 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 5 Mar 2020 08:32:46 -0500 Subject: [petsc-users] p4est configure error Message-ID: I had a user report a problem on Cori and I see it on my Mac. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1477959 bytes Desc: not available URL: From knepley at gmail.com Thu Mar 5 07:34:17 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Mar 2020 08:34:17 -0500 Subject: [petsc-users] OS X Catalina In-Reply-To: References: Message-ID: On Thu, Mar 5, 2020 at 7:17 AM Mark Adams wrote: > I updated to OS X Catalina and I get this error. I opened Xcode and in > updated some stuff and I reinstalled mpich with homebrew. Any ideas? > The -lSystem library is not where it expects. Running Executable WITHOUT threads to time it out Executing: /usr/local/Cellar/mpich/3.3.2/bin/mpif90 -o /var/folders/sw/67cq0mmx43g93vrb5xkf1j7c0000gn/T/petsc-ZdkBf4/config.setCompilers/conftest /var/folders/sw/67cq0mmx43g93vrb5xkf1j7c0000gn/T/petsc-ZdkBf4/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: ld: library not found for -lSystem collect2: error: ld returned 1 exit status Error testing Fortran compiler: Cannot compile/link FC with /usr/local/Cellar/mpich/3.3.2/bin/mpif90. MPI installation /usr/local/Cellar/mpich/3.3.2/bin/mpif90 is likely incorrect. Use --with-mpi-dir to indicate an alternate MPI. Deleting "FC" I am guessing the uninstall was not "unny" enough. I have never found Homebrew an improvement over installing by hand. Thanks, Matt > Thanks, > Mark > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Mar 5 07:37:04 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Mar 2020 08:37:04 -0500 Subject: [petsc-users] p4est configure error In-Reply-To: References: Message-ID: It checks the F77 compiler by trying to compile an LAPACK thing, and cannot find LAPACK configure:4356: mpif77 conftest.f -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -lz -llapack -lblas >&5 ld: library not found for -llapack collect2: error: ld returned 1 exit status configure:4360: $? = 1 configure:4398: result: no configure: failed program was: | program main | | end configure:4403: error: in `/Users/markadams/Codes/petsc/arch-macosx-gnu-g/externalpackages/git.p4est': configure:4405: error: Fortran 77 compiler cannot create executables Is your LAPACK screwed up somehow? Thanks, Matt On Thu, Mar 5, 2020 at 8:33 AM Mark Adams wrote: > I had a user report a problem on Cori and I see it on my Mac. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Mar 5 07:39:20 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 5 Mar 2020 07:39:20 -0600 Subject: [petsc-users] OS X Catalina In-Reply-To: References: Message-ID: On Thu, 5 Mar 2020, Mark Adams wrote: > I updated to OS X Catalina and I get this error. I opened Xcode and in > updated some stuff and I reinstalled mpich with homebrew. Any ideas? Its best to reinstall all homebrew packages. https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2016-January/028105.html Satish > Thanks, > Mark > From balay at mcs.anl.gov Thu Mar 5 07:48:05 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 5 Mar 2020 07:48:05 -0600 Subject: [petsc-users] p4est configure error In-Reply-To: References: Message-ID: you are using --with-fc=0 >>>> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-g -O0" CXXOPTFLAGS="-g -O0" FOPTFLAGS="-g -O0" --with-fc=0 --download-metis=1 --download-parmetis=1 --download-fftw --download-p4est=1 --download-zlib --with-cxx-dialect=C++11 --download-triangle=1 --download-hdf5=1 -with-cuda=0 --with-x=1 --with-debugging=1 PETSC_ARCH=arch-macosx-gnu-g --with-64-bit-indices=0 <<< Lapack is found by petsc configure >>>> Executing: /usr/local/Cellar/mpich/3.3.2/bin/mpicc -o /var/folders/sw/67cq0mmx43g93vrb5xkf1j7c0000gn/T/petsc-bVUXvO/config.libraries/conftest -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g -O0 /var/folders/sw/67cq0mmx43g93vrb5xkf1j7c0000gn/T/petsc-bVUXvO/config.libraries/conftest.o -llapack -lblas -llapack -lblas -lc++ -ldl Defined "HAVE_LIBLAPACK" to "1" <<<<< And p4est is configured with ' --disable-fortran --disable-fc --disable-f77 --disable-f90' options. 
>>>>>>>>>>> Executing: ./configure --prefix=/Users/markadams/Codes/petsc/arch-macosx-gnu-g MAKE=/usr/bin/make --libdir=/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -g -O0" AR="/usr/bin/ar" ARFLAGS="cr" CXX="/usr/local/Cellar/mpich/3.3.2/bin/mpicxx" CXXFLAGS="-fstack-protector -fno-stack-check -g -O0 -std=c++11" --disable-fortran --disable-fc --disable-f77 --disable-f90 --enable-shared --enable-mpi CPPFLAGS="-I/Users/markadams/Codes/petsc/arch-macosx-gnu-g/include -I/usr/local/Cellar/mpich/3.3.2/include" LIBS="-Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -lz -llapack -lblas" --enable-memalign=16 <<<<<<<<< yet its trying to use mpif77 >>>>> configure:4356: mpif77 conftest.f -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -lz -llapack -lblas >&5 ld: library not found for -llapack <<<<< For now - I'll recommend getting your MPI fixed to include functional mpif90 Satish On Thu, 5 Mar 2020, Matthew Knepley wrote: > It checks the F77 compiler by trying to compile an LAPACK thing, and cannot > find LAPACK > > configure:4356: mpif77 conftest.f > -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib > -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -lz -llapack -lblas >&5 > ld: library not found for -llapack > collect2: error: ld returned 1 exit status > configure:4360: $? = 1 > configure:4398: result: no > configure: failed program was: > | program main > | > | end > configure:4403: error: in > `/Users/markadams/Codes/petsc/arch-macosx-gnu-g/externalpackages/git.p4est': > configure:4405: error: Fortran 77 compiler cannot create executables > > Is your LAPACK screwed up somehow? > > Thanks, > > Matt > > On Thu, Mar 5, 2020 at 8:33 AM Mark Adams wrote: > > > I had a user report a problem on Cori and I see it on my Mac. > > > > > From balay at mcs.anl.gov Thu Mar 5 07:51:54 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 5 Mar 2020 07:51:54 -0600 Subject: [petsc-users] Inertia computation fails Message-ID: The address here should be petsc-users - not petsc-users-bounces balay at sb /home/balay/git-repo/slepc (master=) $ git grep 'Inertia computation fails' src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",sr->int1); src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); src/pep/impls/krylov/stoar/qslice.c: if (!nconv) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",nzshift); src/pep/impls/krylov/stoar/qslice.c: if (zeros) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); So the message is likely coming from slepc Satish ---------- Forwarded message ---------- Date: Thu, 05 Mar 2020 14:43:13 +0100 From: Perceval Desforges To: petsc-users Subject: Inertia computation fails Dear PETSc developpers, I am using SLEPC and MUMPS to calculate the eigenvalues of a real symmetric matrix in an interval. I have come upon a crash and I was unable to find any documentation on the error I got. The error is: Inertia computation fails in 2.19893 Is this a slepc, a Petsc or a mumps problem? 
Thanks, Perceval From jroman at dsic.upv.es Thu Mar 5 07:58:51 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 5 Mar 2020 14:58:51 +0100 Subject: [petsc-users] Inertia computation fails In-Reply-To: References: Message-ID: <6B26CC19-497E-4F10-B7F7-82AA2EDBEDDB@dsic.upv.es> Are you using the EPSKrylovSchurSetDetectZeros() option? As in this example https://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex25.c.html If so, then the explanation is probably that one of the endpoints of your interval coincides with an eigenvalue. Try with a slightly different interval. Jose > El 5 mar 2020, a las 14:51, Satish Balay via petsc-users escribi?: > > The address here should be petsc-users - not petsc-users-bounces > > balay at sb /home/balay/git-repo/slepc (master=) > $ git grep 'Inertia computation fails' > src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",sr->int1); > src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); > src/pep/impls/krylov/stoar/qslice.c: if (!nconv) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",nzshift); > src/pep/impls/krylov/stoar/qslice.c: if (zeros) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); > > So the message is likely coming from slepc > > Satish > > ---------- Forwarded message ---------- > Date: Thu, 05 Mar 2020 14:43:13 +0100 > From: Perceval Desforges > To: petsc-users > Subject: Inertia computation fails > > Dear PETSc developpers, I am using SLEPC and MUMPS to calculate the eigenvalues of a real > symmetric matrix in an interval. I have come upon a crash and I was > unable to find any documentation on the error I got. The error is: Inertia computation fails in 2.19893 Is this a slepc, a Petsc or a mumps problem? Thanks, Perceval From perceval.desforges at polytechnique.edu Thu Mar 5 08:00:45 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Thu, 05 Mar 2020 15:00:45 +0100 Subject: [petsc-users] Inertia computation fails In-Reply-To: References: Message-ID: <56b06c59d94469804f7cd5877f8eb418@polytechnique.edu> Ah thank you, I suspected I made a mistake, I hadn't received a copy of my email... What is the reason for this failure? Is there anything I can do about it? 
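For reference, a rough C sketch of the spectrum-slicing setup suggested above, following the linked ex25 tutorial. A is assumed to be the assembled symmetric matrix, the interval endpoints are placeholders (the point being to keep them away from actual eigenvalues), PCFactorSetMatSolverType assumes PETSc 3.9 or later, and the MUMPS-specific ICNTL settings used in ex25 can be added through the options database.

  PetscErrorCode ierr;
  EPS            eps;
  ST             st;
  KSP            ksp;
  PC             pc;

  ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps,A,NULL);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps,EPS_HEP);CHKERRQ(ierr);
  ierr = EPSSetType(eps,EPSKRYLOVSCHUR);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(eps,EPS_ALL);CHKERRQ(ierr);
  /* placeholder interval; choose endpoints that do not coincide with eigenvalues */
  ierr = EPSSetInterval(eps,2.0,3.0);CHKERRQ(ierr);
  ierr = EPSKrylovSchurSetDetectZeros(eps,PETSC_TRUE);CHKERRQ(ierr);
  /* shift-and-invert with a Cholesky factorization from MUMPS, as in ex25 */
  ierr = EPSGetST(eps,&st);CHKERRQ(ierr);
  ierr = STSetType(st,STSINVERT);CHKERRQ(ierr);
  ierr = STGetKSP(st,&ksp);CHKERRQ(ierr);
  ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCCHOLESKY);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverType(pc,MATSOLVERMUMPS);CHKERRQ(ierr);
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
  ierr = EPSSolve(eps);CHKERRQ(ierr);
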
Thanks, Perceval, > The address here should be petsc-users - not petsc-users-bounces > > balay at sb /home/balay/git-repo/slepc (master=) > $ git grep 'Inertia computation fails' > src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",sr->int1); > src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); > src/pep/impls/krylov/stoar/qslice.c: if (!nconv) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",nzshift); > src/pep/impls/krylov/stoar/qslice.c: if (zeros) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); > > So the message is likely coming from slepc > > Satish > > ---------- Forwarded message ---------- > Date: Thu, 05 Mar 2020 14:43:13 +0100 > From: Perceval Desforges > To: petsc-users > Subject: Inertia computation fails > > Dear PETSc developpers, I am using SLEPC and MUMPS to calculate the eigenvalues of a real > symmetric matrix in an interval. I have come upon a crash and I was > unable to find any documentation on the error I got. > > The error is: Inertia computation fails in 2.19893 Is this a slepc, a Petsc or a mumps problem? > > Thanks, > > Perceval -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Mar 5 08:04:42 2020 From: jed at jedbrown.org (Jed Brown) Date: Thu, 05 Mar 2020 07:04:42 -0700 Subject: [petsc-users] Product of nonsymmetric matrices is symmetric In-Reply-To: <20200305132221.Horde.DXKRGaTTeUrMMcQYJwAgwgI@webmail.mpcdf.mpg.de> References: <20200305132221.Horde.DXKRGaTTeUrMMcQYJwAgwgI@webmail.mpcdf.mpg.de> Message-ID: <8736am28zp.fsf@jedbrown.org> The place for this to go would be MatPtAP() for matrix D of SBAIJ matrix type (thus producing A of type SBAIJ), but we don't have an implementation of that. You can do the product with an AIJ matrix and then MatConvert to SBAIJ. flw at rzg.mpg.de writes: > Dear PETSc team, > I have a linear system of the form > A*x=b, > where > A=(D-G^T D G). > Here, G and D are real NxN matrices, D is diagonal and G is not > symmetric. So far, we are using the matmpiaij format for all of the > given matrices and create A with the help of matmatmult. > > However, as is easy to show, the matrix A itself is symmetric, due > to the sandwich G^T D G. Therefore, we would like to make use of this > fact and use the mpisbaij format fir A instead of mpiaij. Can you tell > me how to set up the matrix A in this fashion? Unfortunately, I > haven't found anything on that in the archive yet. > > > Best regards, > Felix From perceval.desforges at polytechnique.edu Thu Mar 5 08:22:08 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Thu, 05 Mar 2020 15:22:08 +0100 Subject: [petsc-users] Inertia computation fails In-Reply-To: <6B26CC19-497E-4F10-B7F7-82AA2EDBEDDB@dsic.upv.es> References: <6B26CC19-497E-4F10-B7F7-82AA2EDBEDDB@dsic.upv.es> Message-ID: Sorry I hadn't seen that you had responded. Thanks a lot, I'll try that. Regards, Perceval, > Are you using the EPSKrylovSchurSetDetectZeros() option? As in this example https://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex25.c.html > If so, then the explanation is probably that one of the endpoints of your interval coincides with an eigenvalue. Try with a slightly different interval. 
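On the G^T D G system discussed above, a rough sketch of the MatPtAP/MatConvert route suggested there. D and G are assumed to be assembled MPIAIJ matrices, the fill estimate is a guess, and setting the symmetry flag before the AIJ-to-SBAIJ conversion is an assumption worth checking.

  PetscErrorCode ierr;
  Mat            GtDG,Asbaij;

  /* GtDG = G^T * D * G (MatPtAP computes P^T A P; here A = D and P = G) */
  ierr = MatPtAP(D,G,MAT_INITIAL_MATRIX,2.0,&GtDG);CHKERRQ(ierr);
  /* A = D - G^T D G, still in AIJ format */
  ierr = MatScale(GtDG,-1.0);CHKERRQ(ierr);
  ierr = MatAXPY(GtDG,1.0,D,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  /* mark as symmetric and convert, so only one triangle is stored */
  ierr = MatSetOption(GtDG,MAT_SYMMETRIC,PETSC_TRUE);CHKERRQ(ierr);
  ierr = MatConvert(GtDG,MATMPISBAIJ,MAT_INITIAL_MATRIX,&Asbaij);CHKERRQ(ierr);
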
> > Jose > >> El 5 mar 2020, a las 14:51, Satish Balay via petsc-users escribi?: >> >> The address here should be petsc-users - not petsc-users-bounces >> >> balay at sb /home/balay/git-repo/slepc (master=) >> $ git grep 'Inertia computation fails' >> src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",sr->int1); >> src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); >> src/pep/impls/krylov/stoar/qslice.c: if (!nconv) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",nzshift); >> src/pep/impls/krylov/stoar/qslice.c: if (zeros) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); >> >> So the message is likely coming from slepc >> >> Satish >> >> ---------- Forwarded message ---------- >> Date: Thu, 05 Mar 2020 14:43:13 +0100 >> From: Perceval Desforges >> To: petsc-users >> Subject: Inertia computation fails >> >> Dear PETSc developpers, I am using SLEPC and MUMPS to calculate the eigenvalues of a real >> symmetric matrix in an interval. I have come upon a crash and I was >> unable to find any documentation on the error I got. The error is: Inertia computation fails in 2.19893 Is this a slepc, a Petsc or a mumps problem? Thanks, Perceval -------------- next part -------------- An HTML attachment was scrubbed... URL: From adantra at gmail.com Thu Mar 5 10:12:25 2020 From: adantra at gmail.com (Adolfo Rodriguez) Date: Thu, 5 Mar 2020 10:12:25 -0600 Subject: [petsc-users] Problems with MKL? Message-ID: I am experiencing a very stubborn issue, apparently related to an MKL issue. I am solving a linear system using ksp which works well on windows. I am trying to port this program to Linux now but I have been getting an error coming from the solver (the matrix, right-hand side, and initial solution vectors have been constructed without any issues). However, when trying to compute the norm of any vector I get and error. Running the same program with the debug option on, I get the message shown below. I tried valgrind but did not help. Any suggestions? Regards, Adolfo [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. 
[0]PETSC ERROR: [0] BLASasum line 259 /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c [0]PETSC ERROR: [0] VecNorm_Seq line 221 /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c [0]PETSC ERROR: [0] VecNorm line 213 /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/interface/rvector.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Signal received [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.12.3, Jan, 03, 2020 [0]PETSC ERROR: Unknown Name on a arch-linux-oxy-dbg named ohylrss0 by rodriad Thu Mar 5 10:08:03 2020 [0]PETSC ERROR: Configure options --with-debugging=yes --with-mpi-dir=/apps/Intel/XEcluster/compilers_and_libraries_2018.0.128/linux/mpi/intel64 COPTFLAGS=-debug CXXOPTFLAGS=-debug FOPTFLAGS=-debug PETSC_ARCH=arch-linux-oxy-dbg [0]PETSC ERROR: #1 User provided function() line 0 in unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Mar 5 10:25:15 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 5 Mar 2020 10:25:15 -0600 Subject: [petsc-users] Problems with MKL? In-Reply-To: References: Message-ID: You can try running the code in a debugger and check on the values before the blas call. Or make a debug build with gcc/gfortran/--download-fblaslapack --download-mpich and try valgrind again. BTW: I don't see MKL in configure options here - so likely system blas/lapack is used.. Satish On Thu, 5 Mar 2020, Adolfo Rodriguez wrote: > I am experiencing a very stubborn issue, apparently related to an MKL > issue. I am solving a linear system using ksp which works well on windows. > I am trying to port this program to Linux now but I have been getting an > error coming from the solver (the matrix, right-hand side, and initial > solution vectors have been constructed without any issues). However, when > trying to compute the norm of any vector I get and error. Running the same > program with the debug option on, I get the message shown below. I tried > valgrind but did not help. Any suggestions? > > Regards, > > Adolfo > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X > to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. 
> [0]PETSC ERROR: [0] BLASasum line 259 > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c > [0]PETSC ERROR: [0] VecNorm_Seq line 221 > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c > [0]PETSC ERROR: [0] VecNorm line 213 > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/interface/rvector.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Signal received > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for > trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.12.3, Jan, 03, 2020 > [0]PETSC ERROR: Unknown Name on a arch-linux-oxy-dbg named ohylrss0 by > rodriad Thu Mar 5 10:08:03 2020 > [0]PETSC ERROR: Configure options --with-debugging=yes > --with-mpi-dir=/apps/Intel/XEcluster/compilers_and_libraries_2018.0.128/linux/mpi/intel64 > COPTFLAGS=-debug CXXOPTFLAGS=-debug FOPTFLAGS=-debug > PETSC_ARCH=arch-linux-oxy-dbg > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > From adantra at gmail.com Thu Mar 5 11:23:59 2020 From: adantra at gmail.com (Adolfo Rodriguez) Date: Thu, 5 Mar 2020 11:23:59 -0600 Subject: [petsc-users] Problems with MKL? In-Reply-To: References: Message-ID: Satish, Thanks you for super-fast reply. Unfortunately, I cannot follow your suggestion because the application I am linking PETSc is being compiled with INTEL and gcc will not work, same goes with mpich. However, I am positively sure that the problem is that the original application links to mkl STATICALLY, while petsc links to the dynamic libraries. Also, I would like to compile PETSC statically. My question is: can I link to the statical mkl libraries? I tried --with-blas-lapcak-lib=mkl_xx.a and did not work (I am doing exactly that in windows and works). Thanks for your help, Adolfo On Thu, Mar 5, 2020 at 10:25 AM Satish Balay wrote: > You can try running the code in a debugger and check on the values before > the blas call. > > Or make a debug build with gcc/gfortran/--download-fblaslapack > --download-mpich and try valgrind again. > > BTW: I don't see MKL in configure options here - so likely system > blas/lapack is used.. > > Satish > > On Thu, 5 Mar 2020, Adolfo Rodriguez wrote: > > > I am experiencing a very stubborn issue, apparently related to an MKL > > issue. I am solving a linear system using ksp which works well on > windows. > > I am trying to port this program to Linux now but I have been getting an > > error coming from the solver (the matrix, right-hand side, and initial > > solution vectors have been constructed without any issues). However, when > > trying to compute the norm of any vector I get and error. Running the > same > > program with the debug option on, I get the message shown below. I tried > > valgrind but did not help. Any suggestions? 
> > > > Regards, > > > > Adolfo > > > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > > probably memory access out of range > > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > > [0]PETSC ERROR: or see > > https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > OS X > > to find memory corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- Stack Frames > > ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > > [0]PETSC ERROR: is given. > > [0]PETSC ERROR: [0] BLASasum line 259 > > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c > > [0]PETSC ERROR: [0] VecNorm_Seq line 221 > > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c > > [0]PETSC ERROR: [0] VecNorm line 213 > > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/interface/rvector.c > > [0]PETSC ERROR: --------------------- Error Message > > -------------------------------------------------------------- > > [0]PETSC ERROR: Signal received > > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for > > trouble shooting. > > [0]PETSC ERROR: Petsc Release Version 3.12.3, Jan, 03, 2020 > > [0]PETSC ERROR: Unknown Name on a arch-linux-oxy-dbg named ohylrss0 by > > rodriad Thu Mar 5 10:08:03 2020 > > [0]PETSC ERROR: Configure options --with-debugging=yes > > > --with-mpi-dir=/apps/Intel/XEcluster/compilers_and_libraries_2018.0.128/linux/mpi/intel64 > > COPTFLAGS=-debug CXXOPTFLAGS=-debug FOPTFLAGS=-debug > > PETSC_ARCH=arch-linux-oxy-dbg > > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Mar 5 11:29:27 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 5 Mar 2020 11:29:27 -0600 Subject: [petsc-users] Problems with MKL? In-Reply-To: References: Message-ID: On Thu, 5 Mar 2020, Adolfo Rodriguez wrote: > Satish, > > Thanks you for super-fast reply. Unfortunately, I cannot follow your > suggestion because the application I am linking PETSc is being compiled > with INTEL and gcc will not work, same goes with mpich. However, I am > positively sure that the problem is that the original application links to > mkl STATICALLY, while petsc links to the dynamic libraries. Also, I would > like to compile PETSC statically. --with-shared-libraries=0 > > My question is: can I link to the statical mkl libraries? I tried > --with-blas-lapcak-lib=mkl_xx.a and did not work (I am doing exactly that > in windows and works). With --with-shared-libraries=0 - this won't matter [as long as MKL is used during configure]. Satish > > Thanks for your help, > > Adolfo > > On Thu, Mar 5, 2020 at 10:25 AM Satish Balay wrote: > > > You can try running the code in a debugger and check on the values before > > the blas call. > > > > Or make a debug build with gcc/gfortran/--download-fblaslapack > > --download-mpich and try valgrind again. > > > > BTW: I don't see MKL in configure options here - so likely system > > blas/lapack is used.. 
> > > > Satish > > > > On Thu, 5 Mar 2020, Adolfo Rodriguez wrote: > > > > > I am experiencing a very stubborn issue, apparently related to an MKL > > > issue. I am solving a linear system using ksp which works well on > > windows. > > > I am trying to port this program to Linux now but I have been getting an > > > error coming from the solver (the matrix, right-hand side, and initial > > > solution vectors have been constructed without any issues). However, when > > > trying to compute the norm of any vector I get and error. Running the > > same > > > program with the debug option on, I get the message shown below. I tried > > > valgrind but did not help. Any suggestions? > > > > > > Regards, > > > > > > Adolfo > > > > > > [0]PETSC ERROR: > > > ------------------------------------------------------------------------ > > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > > > probably memory access out of range > > > [0]PETSC ERROR: Try option -start_in_debugger or > > -on_error_attach_debugger > > > [0]PETSC ERROR: or see > > > https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > > OS X > > > to find memory corruption errors > > > [0]PETSC ERROR: likely location of problem given in stack below > > > [0]PETSC ERROR: --------------------- Stack Frames > > > ------------------------------------ > > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > > available, > > > [0]PETSC ERROR: INSTEAD the line number of the start of the > > function > > > [0]PETSC ERROR: is given. > > > [0]PETSC ERROR: [0] BLASasum line 259 > > > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c > > > [0]PETSC ERROR: [0] VecNorm_Seq line 221 > > > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/impls/seq/bvec2.c > > > [0]PETSC ERROR: [0] VecNorm line 213 > > > /home/rodriad/CODE/petsc-3.12.3/src/vec/vec/interface/rvector.c > > > [0]PETSC ERROR: --------------------- Error Message > > > -------------------------------------------------------------- > > > [0]PETSC ERROR: Signal received > > > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > > for > > > trouble shooting. > > > [0]PETSC ERROR: Petsc Release Version 3.12.3, Jan, 03, 2020 > > > [0]PETSC ERROR: Unknown Name on a arch-linux-oxy-dbg named ohylrss0 by > > > rodriad Thu Mar 5 10:08:03 2020 > > > [0]PETSC ERROR: Configure options --with-debugging=yes > > > > > --with-mpi-dir=/apps/Intel/XEcluster/compilers_and_libraries_2018.0.128/linux/mpi/intel64 > > > COPTFLAGS=-debug CXXOPTFLAGS=-debug FOPTFLAGS=-debug > > > PETSC_ARCH=arch-linux-oxy-dbg > > > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > > > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > > > > > From mfadams at lbl.gov Thu Mar 5 12:34:31 2020 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 5 Mar 2020 13:34:31 -0500 Subject: [petsc-users] OS X Catalina In-Reply-To: References: Message-ID: Thanks Satish. That worked great. Everything is fixed. On Thu, Mar 5, 2020 at 8:39 AM Satish Balay wrote: > On Thu, 5 Mar 2020, Mark Adams wrote: > > > I updated to OS X Catalina and I get this error. I opened Xcode and in > > updated some stuff and I reinstalled mpich with homebrew. Any ideas? > > Its best to reinstall all homebrew packages. 
> > > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2016-January/028105.html > > Satish > > > > Thanks, > > Mark > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Mar 5 12:53:28 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 5 Mar 2020 12:53:28 -0600 Subject: [petsc-users] OS X Catalina In-Reply-To: References: Message-ID: On Thu, 5 Mar 2020, Matthew Knepley wrote: > I am guessing the uninstall was not "unny" enough. I have never found > Homebrew an improvement over installing by hand. Homebrew has been convenient for me. We've been using this for OSX CI/nightlybuilds for a long time.. [primarily for gfortran] ipro:~ balay$ brew leaves automake ccache cmake gcc git gitlab-runner libtool m4 pkg-config wget Satish From pranayreddy865 at gmail.com Fri Mar 6 11:14:27 2020 From: pranayreddy865 at gmail.com (baikadi pranay) Date: Fri, 6 Mar 2020 10:14:27 -0700 Subject: [petsc-users] PETSc backward compatibility Message-ID: Hello PETSc users, We have a FORTRAN code which uses eigenvalue solver routines. The version of PETSc/SLEPc used is 3.11.1.We would be deploying the code on a cluster which has PETSc/SLEPc version 3.6.4. We were wondering if we need to change any functions or if there is backwards compatibility. Please let me know if you need any further information. Thank you, Pranay. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Mar 6 12:14:24 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 6 Mar 2020 13:14:24 -0500 Subject: [petsc-users] PETSc backward compatibility In-Reply-To: References: Message-ID: On Fri, Mar 6, 2020 at 12:16 PM baikadi pranay wrote: > Hello PETSc users, > > We have a FORTRAN code which uses eigenvalue solver routines. The version > of PETSc/SLEPc used is 3.11.1.We would be deploying the code on a cluster > which has PETSc/SLEPc version 3.6.4. We were wondering if we need to change > any functions or if there is backwards compatibility. > > Please let me know if you need any further information. > 3.6 is 5 years old. It would be easier to just install the new version on the cluster. That would take 20min max. However, if you would really like to make it compatible, you can look at the list of changes here: https://www.mcs.anl.gov/petsc/documentation/changes/index.html Thanks, Matt > Thank you, > Pranay. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Sat Mar 7 06:23:33 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Sat, 7 Mar 2020 13:23:33 +0100 Subject: [petsc-users] PETSc backward compatibility In-Reply-To: References: Message-ID: I agree that it's better to upgrade whenever possible, but if for reasons out of your control you find yourself needing to support multiple versions of PETSc, which span API changes, you can use macros that PETSc provides for you, as described here: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PETSC_VERSION.html I found this handy in a similar situation to yours. There was one lingering production cluster that had an old version of PETSc in a very-convenient-for-users module, even though everywhere else we could use the latest. 
#if PETSC_VERSION_GE(3,9,0) call PCFactorSetUpMatSolverType(pc,ierr);CHKERRQ(ierr); #else call PCFactorSetUpMatSolverPackage(pc,ierr);CHKERRQ(ierr); ! PETSc 3.8 and earlier #endif Am Fr., 6. M?rz 2020 um 19:15 Uhr schrieb Matthew Knepley : > On Fri, Mar 6, 2020 at 12:16 PM baikadi pranay > wrote: > >> Hello PETSc users, >> >> We have a FORTRAN code which uses eigenvalue solver routines. The version >> of PETSc/SLEPc used is 3.11.1.We would be deploying the code on a cluster >> which has PETSc/SLEPc version 3.6.4. We were wondering if we need to change >> any functions or if there is backwards compatibility. >> >> Please let me know if you need any further information. >> > > 3.6 is 5 years old. It would be easier to just install the new version on > the cluster. That would take 20min max. > > However, if you would really like to make it compatible, you can look at > the list of changes here: > https://www.mcs.anl.gov/petsc/documentation/changes/index.html > > Thanks, > > Matt > > >> Thank you, >> Pranay. >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat Mar 7 11:06:35 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 7 Mar 2020 12:06:35 -0500 Subject: [petsc-users] PETSc backward compatibility In-Reply-To: References: Message-ID: It is a fact of life that some projects just don't have the resources, minimal though they may be, to move everyone to a new PETSc and not maintain "backward compatibility" with older PETSc versions. These MACROs work well and are a fine way to go. Note, there were some big changes, bigger than anything before and I would hope bigger than ever again, in Fortran from 3.7: paths for includes changed to deal with package managers and NULL was replaced by typed NULLS (eg, PETSC_NULL_INTEGER, PETSC_NULL_SCALAR...) to get type checking in Fortran. This touches a lot of code and makes a mess of Macros, but it is doable (I've done it). Either way it is a lot more work than is usually required to maintain PETSc interfaces. On Sat, Mar 7, 2020 at 7:25 AM Patrick Sanan wrote: > I agree that it's better to upgrade whenever possible, but if for reasons > out of your control you find yourself needing to support multiple versions > of PETSc, > which span API changes, you can use macros that PETSc provides for you, as > described here: > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PETSC_VERSION.html > > I found this handy in a similar situation to yours. There was one > lingering production cluster that had an old version of PETSc in a > very-convenient-for-users module, even though everywhere else we could use > the latest. > > #if PETSC_VERSION_GE(3,9,0) > call PCFactorSetUpMatSolverType(pc,ierr);CHKERRQ(ierr); > #else > call PCFactorSetUpMatSolverPackage(pc,ierr);CHKERRQ(ierr); ! PETSc 3.8 and > earlier > #endif > > > Am Fr., 6. M?rz 2020 um 19:15 Uhr schrieb Matthew Knepley < > knepley at gmail.com>: > >> On Fri, Mar 6, 2020 at 12:16 PM baikadi pranay >> wrote: >> >>> Hello PETSc users, >>> >>> We have a FORTRAN code which uses eigenvalue solver routines. The >>> version of PETSc/SLEPc used is 3.11.1.We would be deploying the code on a >>> cluster which has PETSc/SLEPc version 3.6.4. 
We were wondering if we need >>> to change any functions or if there is backwards compatibility. >>> >>> Please let me know if you need any further information. >>> >> >> 3.6 is 5 years old. It would be easier to just install the new version on >> the cluster. That would take 20min max. >> >> However, if you would really like to make it compatible, you can look at >> the list of changes here: >> https://www.mcs.anl.gov/petsc/documentation/changes/index.html >> >> Thanks, >> >> Matt >> >> >>> Thank you, >>> Pranay. >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.rochan at gmail.com Mon Mar 9 14:02:43 2020 From: u.rochan at gmail.com (Rochan Upadhyay) Date: Mon, 9 Mar 2020 14:02:43 -0500 Subject: [petsc-users] SuperLU_Dist bug or "intentional error" Message-ID: Dear PETSc Developers, I am having trouble interfacing SuperLU_Dist as a direct solver for certain problems in PETSc. The problem is that when interfacing with SuperLU_Dist, you need your matrix to be of Type MPISEQAIJ when running MPI with one processor. PETSc has long allowed the use of Matrix type MPIAIJ for all MPI runs, including MPI with a single processor and that is still the case for all of PETSc's native solvers. This however has been broken for the SuperLU_Dist option. The following code snippet (in PETSc and not SuperLU_Dist) is responsible for this restriction and I do not know if it is by design or accident : In file petsc-3.12.4/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c line 257 onwards : ierr = MPI_Comm_size(PetscObjectComm((PetscObject)A),&size);CHKERRQ(ierr); if (size == 1) { aa = (Mat_SeqAIJ*)A->data; rstart = 0; nz = aa->nz; } else { Mat_MPIAIJ *mat = (Mat_MPIAIJ*)A->data; aa = (Mat_SeqAIJ*)(mat->A)->data; bb = (Mat_SeqAIJ*)(mat->B)->data; ai = aa->i; aj = aa->j; bi = bb->i; bj = bb->j; The code seems to check for number of processors and if it is = 1 conclude that the matrix is a Mat_SeqAIJ and perform some operations. Only if number-of-procs > 1 then it assumes that matrix is of type Mat_MPIAIJ. I think this is unwieldy and lacks generality. One would like the same piece of code to run in MPI mode for all processors with type Mat_MPIAIJ. Also this restriction has suddenly appeared in a recent version. The issue was not there until at least 3.9.4. So my question is from now (e.g. v12.4) on, should we always use matrix type Mat_SeqAIJ when running on 1 processor even with MPI enabled and use Mat_MPIAIJ for more than 1 processor. That is use the number of processors in use as a criterion to set the matrix type ? A an illustration, I have attached a minor modification of KSP example 12, that used to work with all PETSc versions until at least 3.9.4 but now throws a segmentation fault. It was compiled with MPI and run with mpiexec -n 1 ./ex12 If I remove the "ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);" it is okay. I hope you can clarify my confusion. Regards, Rochan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
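Regarding the question above, and independent of whether the size==1 branch is intended: one possible workaround (a suggestion, not something from the thread) is to set the generic MATAIJ type rather than forcing MATMPIAIJ, so the concrete sequential or parallel implementation is chosen from the communicator size and the SuperLU_Dist interface sees the storage it expects in both cases. The sizes m, n and the preallocation counts below are placeholders.

  PetscErrorCode ierr;
  Mat            A;

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m,n);CHKERRQ(ierr);
  /* MATAIJ becomes MATSEQAIJ on one rank and MATMPIAIJ on several */
  ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  /* preallocation: the call that does not match the actual type is a no-op */
  ierr = MatSeqAIJSetPreallocation(A,5,NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A,5,NULL,5,NULL);CHKERRQ(ierr);
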
Name: ex12.c Type: text/x-csrc Size: 7902 bytes Desc: not available URL: From jczhang at mcs.anl.gov Mon Mar 9 14:22:10 2020 From: jczhang at mcs.anl.gov (Junchao Zhang) Date: Mon, 9 Mar 2020 14:22:10 -0500 Subject: [petsc-users] SuperLU_Dist bug or "intentional error" In-Reply-To: References: Message-ID: Could you try the master branch since it seems Stefano fixed this problem recently? --Junchao Zhang On Mon, Mar 9, 2020 at 2:04 PM Rochan Upadhyay wrote: > Dear PETSc Developers, > > I am having trouble interfacing SuperLU_Dist as a direct solver for > certain problems in PETSc. The problem is that when interfacing with > SuperLU_Dist, you need your matrix to be of Type MPISEQAIJ when running MPI > with one processor. PETSc has long allowed the use of Matrix type MPIAIJ > for all MPI runs, including MPI with a single processor and that is still > the case for all of PETSc's native solvers. This however has been broken > for the SuperLU_Dist option. The following code snippet (in PETSc and not > SuperLU_Dist) is responsible for this restriction and I do not know if it > is by design or accident : > > In file petsc-3.12.4/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > line 257 onwards : > > ierr = MPI_Comm_size(PetscObjectComm((PetscObject)A),&size);CHKERRQ(ierr); > > if (size == 1) { > aa = (Mat_SeqAIJ*)A->data; > rstart = 0; > nz = aa->nz; > } else { > Mat_MPIAIJ *mat = (Mat_MPIAIJ*)A->data; > aa = (Mat_SeqAIJ*)(mat->A)->data; > bb = (Mat_SeqAIJ*)(mat->B)->data; > ai = aa->i; aj = aa->j; > bi = bb->i; bj = bb->j; > > The code seems to check for number of processors and if it is = 1 conclude > that the matrix is a Mat_SeqAIJ and perform some operations. Only if > number-of-procs > 1 then it assumes that matrix is of type Mat_MPIAIJ. I > think this is unwieldy and lacks generality. One would like the same piece > of code to run in MPI mode for all processors with type Mat_MPIAIJ. Also > this restriction has suddenly appeared in a recent version. The issue was > not there until at least 3.9.4. So my question is from now (e.g. v12.4) on, > should we always use matrix type Mat_SeqAIJ when running on 1 processor > even with MPI enabled and use Mat_MPIAIJ for more than 1 processor. That is > use the number of processors in use as a criterion to set the matrix type ? > > A an illustration, I have attached a minor modification of KSP example 12, > that used to work with all PETSc versions until at least 3.9.4 but now > throws a segmentation fault. It was compiled with MPI and run with mpiexec > -n 1 ./ex12 > If I remove the "ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);" it is okay. > > I hope you can clarify my confusion. > > Regards, > Rochan > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Zane.Jakobs at colorado.edu Mon Mar 9 14:46:32 2020 From: Zane.Jakobs at colorado.edu (Zane Charles Jakobs) Date: Mon, 9 Mar 2020 13:46:32 -0600 Subject: [petsc-users] Discrepancy between valgrind and -malloc_dump output Message-ID: Hi PETSc devs, I have a C++ program written with PETSc (code available at https://github.com/DiffeoInvariant/NLTS/blob/master/examples/mutual_opt.cpp), and if I run it with -malloc_debug and -malloc_dump enabled, PETSc says no memory is left unfreed at exit. 
However, running with valgrind --leak-check=full --show-leak-kinds=all gives me this leak summary: LEAK SUMMARY: ==15364== definitely lost: 0 bytes in 0 blocks ==15364== indirectly lost: 0 bytes in 0 blocks ==15364== possibly lost: 0 bytes in 0 blocks ==15364== still reachable: 610,904 bytes in 8,246 blocks ==15364== suppressed: 0 bytes in 0 blocks The information from --show-leak-kinds=all gives several blocks of the form (this is the last block printed; I've attached the full output from valgrind to this email) ==15364== 329,080 bytes in 3 blocks are still reachable in loss record 7 of 7 ==15364== at 0x483CFAF: realloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==15364== by 0x7BDE1A7: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==15364== by 0x7BDE41C: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==15364== by 0x7BDE579: pci_device_get_device_name (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==15364== by 0x759CFC4: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==15364== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==15364== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==15364== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==15364== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==15364== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==15364== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==15364== by 0x4023CB: main (mutual_opt.cpp:30) The function nlts::io::Initialize that's referenced at the bottom is simply PetscErrorCode nlts::io::Initialize(int *argc, char ***argv, const char file[], const char help[]) { PetscFunctionBeginUser; auto ierr = PetscInitialize(argc, argv, file, help);CHKERRQ(ierr); PetscFunctionReturn(ierr); } Is valgrind actually detecting unfreed memory here, or is this something I don't need to worry about? Thanks! -Zane Jakobs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- ==16115== Memcheck, a memory error detector ==16115== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==16115== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info ==16115== Command: ../bin/mutual-opt -f trajectory_data_1/data_x.dat -o mutual_info.txt -D 300 -malloc_debug -malloc_dump ==16115== Minimizing mutual information. Tau that minimizes mutual information: 153. WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! There is one unused database option. It is: Option left: name:-malloc_debug (no value) ==16115== ==16115== HEAP SUMMARY: ==16115== in use at exit: 610,904 bytes in 8,246 blocks ==16115== total heap usage: 19,334 allocs, 11,069 frees, 1,134,882,918 bytes allocated ==16115== ==16115== 76 bytes in 3 blocks are still reachable in loss record 1 of 7 ==16115== at 0x483A7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==16115== by 0x77A353E: strdup (strdup.c:42) ==16115== by 0x7BDE371: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE469: ??? 
(in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x759CF76: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==16115== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==16115== by 0x4023CB: main (mutual_opt.cpp:30) ==16115== ==16115== 96 bytes in 3 blocks are still reachable in loss record 2 of 7 ==16115== at 0x483CD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==16115== by 0x7BDDF86: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE43D: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x759CF76: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==16115== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==16115== by 0x4023CB: main (mutual_opt.cpp:30) ==16115== ==16115== 136 bytes in 1 blocks are still reachable in loss record 3 of 7 ==16115== at 0x483CD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==16115== by 0x7BDDFAE: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE43D: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x759CF76: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==16115== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==16115== by 0x4023CB: main (mutual_opt.cpp:30) ==16115== ==16115== 1,224 bytes in 9 blocks are still reachable in loss record 4 of 7 ==16115== at 0x483CD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==16115== by 0x7BDDF52: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE43D: ??? 
(in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x759CF76: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==16115== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==16115== by 0x4023CB: main (mutual_opt.cpp:30) ==16115== ==16115== 104,012 bytes in 4,245 blocks are still reachable in loss record 5 of 7 ==16115== at 0x483A7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==16115== by 0x77A353E: strdup (strdup.c:42) ==16115== by 0x7BDE220: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE41C: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE579: pci_device_get_device_name (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x759CFC4: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==16115== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==16115== ==16115== 176,280 bytes in 3,982 blocks are still reachable in loss record 6 of 7 ==16115== at 0x483A7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==16115== by 0x77A353E: strdup (strdup.c:42) ==16115== by 0x7BDE317: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE41C: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE579: pci_device_get_device_name (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x759CFC4: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==16115== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==16115== ==16115== 329,080 bytes in 3 blocks are still reachable in loss record 7 of 7 ==16115== at 0x483CFAF: realloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) ==16115== by 0x7BDE1A7: ??? 
(in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE41C: ??? (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x7BDE579: pci_device_get_device_name (in /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) ==16115== by 0x759CFC4: hwloc_look_pci (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571D0B: hwloc_discover (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7571637: hwloc_topology_load (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x736780A: MPIR_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x7367435: PMPI_Init_thread (in /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) ==16115== by 0x491CABA: PetscInitialize (in /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) ==16115== by 0x724AB0E: nlts::io::Initialize(int*, char***, char const*, char const*) (io.cpp:126) ==16115== by 0x4023CB: main (mutual_opt.cpp:30) ==16115== ==16115== LEAK SUMMARY: ==16115== definitely lost: 0 bytes in 0 blocks ==16115== indirectly lost: 0 bytes in 0 blocks ==16115== possibly lost: 0 bytes in 0 blocks ==16115== still reachable: 610,904 bytes in 8,246 blocks ==16115== suppressed: 0 bytes in 0 blocks ==16115== ==16115== For lists of detected and suppressed errors, rerun with: -s ==16115== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0) From jczhang at mcs.anl.gov Mon Mar 9 14:51:37 2020 From: jczhang at mcs.anl.gov (Junchao Zhang) Date: Mon, 9 Mar 2020 14:51:37 -0500 Subject: [petsc-users] SuperLU_Dist bug or "intentional error" In-Reply-To: References: Message-ID: Let me try it. BTW, did you find the same code at https://gitlab.com/petsc/petsc/-/blob/master/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c --Junchao Zhang On Mon, Mar 9, 2020 at 2:46 PM Rochan Upadhyay wrote: > Hi Junchao, > I doubt if it was fixed as diff of the > src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c between master-branch and > version 12.4 shows no changes. > I am unable to compile the master version (configure.log attached) but I > think you can recreate the problem by running the ex12.c program that > I attached on my previous mail. > Regards, > Rochan > > On Mon, Mar 9, 2020 at 2:22 PM Junchao Zhang wrote: > >> Could you try the master branch since it seems Stefano fixed this problem >> recently? >> --Junchao Zhang >> >> >> On Mon, Mar 9, 2020 at 2:04 PM Rochan Upadhyay >> wrote: >> >>> Dear PETSc Developers, >>> >>> I am having trouble interfacing SuperLU_Dist as a direct solver for >>> certain problems in PETSc. The problem is that when interfacing with >>> SuperLU_Dist, you need your matrix to be of Type MPISEQAIJ when running MPI >>> with one processor. PETSc has long allowed the use of Matrix type MPIAIJ >>> for all MPI runs, including MPI with a single processor and that is still >>> the case for all of PETSc's native solvers. This however has been broken >>> for the SuperLU_Dist option. 
The following code snippet (in PETSc and not >>> SuperLU_Dist) is responsible for this restriction and I do not know if it >>> is by design or accident : >>> >>> In file petsc-3.12.4/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c >>> line 257 onwards : >>> >>> ierr = >>> MPI_Comm_size(PetscObjectComm((PetscObject)A),&size);CHKERRQ(ierr); >>> >>> if (size == 1) { >>> aa = (Mat_SeqAIJ*)A->data; >>> rstart = 0; >>> nz = aa->nz; >>> } else { >>> Mat_MPIAIJ *mat = (Mat_MPIAIJ*)A->data; >>> aa = (Mat_SeqAIJ*)(mat->A)->data; >>> bb = (Mat_SeqAIJ*)(mat->B)->data; >>> ai = aa->i; aj = aa->j; >>> bi = bb->i; bj = bb->j; >>> >>> The code seems to check for number of processors and if it is = 1 >>> conclude that the matrix is a Mat_SeqAIJ and perform some operations. Only >>> if number-of-procs > 1 then it assumes that matrix is of type Mat_MPIAIJ. I >>> think this is unwieldy and lacks generality. One would like the same piece >>> of code to run in MPI mode for all processors with type Mat_MPIAIJ. Also >>> this restriction has suddenly appeared in a recent version. The issue was >>> not there until at least 3.9.4. So my question is from now (e.g. v12.4) on, >>> should we always use matrix type Mat_SeqAIJ when running on 1 processor >>> even with MPI enabled and use Mat_MPIAIJ for more than 1 processor. That is >>> use the number of processors in use as a criterion to set the matrix type ? >>> >>> A an illustration, I have attached a minor modification of KSP example >>> 12, that used to work with all PETSc versions until at least 3.9.4 but now >>> throws a segmentation fault. It was compiled with MPI and run with mpiexec >>> -n 1 ./ex12 >>> If I remove the "ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);" it is >>> okay. >>> >>> I hope you can clarify my confusion. >>> >>> Regards, >>> Rochan >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jczhang at mcs.anl.gov Mon Mar 9 14:59:53 2020 From: jczhang at mcs.anl.gov (Junchao Zhang) Date: Mon, 9 Mar 2020 14:59:53 -0500 Subject: [petsc-users] Discrepancy between valgrind and -malloc_dump output In-Reply-To: References: Message-ID: You might use OpenMPI. From my experience, OpenMPI is not valgrind clear. As you can see the leaks happened in MPI_Init_thread. Try use MPICH, for example, use the --download-mpich option. petsc's -malloc_debug only traces memory allocated by petsc, not OpenMPI. --Junchao Zhang On Mon, Mar 9, 2020 at 2:47 PM Zane Charles Jakobs wrote: > Hi PETSc devs, > > I have a C++ program written with PETSc (code available at > https://github.com/DiffeoInvariant/NLTS/blob/master/examples/mutual_opt.cpp), > and if I run it with -malloc_debug and -malloc_dump enabled, PETSc says no > memory is left unfreed at exit. 
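A minimal sketch of why the two reports can differ: -malloc_dump only covers memory obtained through PetscMalloc(), so a program along the following lines comes out clean under -malloc_debug/-malloc_dump while valgrind can still see blocks allocated inside MPI_Init_thread. The file name, sizes, and vector work here are illustrative only, not taken from the code under discussion.

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscErrorCode ierr;

  /* Run as:  ./check -malloc_debug -malloc_dump
     -malloc_dump reports only memory obtained through PetscMalloc(), so
     allocations made internally by the MPI library (hwloc, libpciaccess)
     never appear here even though valgrind lists them as "still reachable". */
  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &x);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);   /* everything PETSc allocated is freed */
  ierr = PetscFinalize();
  return ierr;
}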
However, running with valgrind > --leak-check=full --show-leak-kinds=all gives me this leak summary: > > LEAK SUMMARY: > ==15364== definitely lost: 0 bytes in 0 blocks > ==15364== indirectly lost: 0 bytes in 0 blocks > ==15364== possibly lost: 0 bytes in 0 blocks > ==15364== still reachable: 610,904 bytes in 8,246 blocks > ==15364== suppressed: 0 bytes in 0 blocks > > The information from --show-leak-kinds=all gives several blocks of the > form (this is the last block printed; I've attached the full output from > valgrind to this email) > > ==15364== 329,080 bytes in 3 blocks are still reachable in loss record 7 > of 7 > ==15364== at 0x483CFAF: realloc (in > /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) > ==15364== by 0x7BDE1A7: ??? (in > /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) > ==15364== by 0x7BDE41C: ??? (in > /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) > ==15364== by 0x7BDE579: pci_device_get_device_name (in > /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) > ==15364== by 0x759CFC4: hwloc_look_pci (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x7571D0B: hwloc_discover (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x7571637: hwloc_topology_load (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x736780A: MPIR_Init_thread (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x7367435: PMPI_Init_thread (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x491CABA: PetscInitialize (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) > ==15364== by 0x724AB0E: nlts::io::Initialize(int*, char***, char > const*, char const*) (io.cpp:126) > ==15364== by 0x4023CB: main (mutual_opt.cpp:30) > > The function nlts::io::Initialize that's referenced at the bottom is simply > > PetscErrorCode nlts::io::Initialize(int *argc, char ***argv, > const char file[], > const char help[]) > { > PetscFunctionBeginUser; > auto ierr = PetscInitialize(argc, argv, file, help);CHKERRQ(ierr); > PetscFunctionReturn(ierr); > } > > Is valgrind actually detecting unfreed memory here, or is this something I > don't need to worry about? > > Thanks! > > -Zane Jakobs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 9 15:01:22 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 9 Mar 2020 16:01:22 -0400 Subject: [petsc-users] Discrepancy between valgrind and -malloc_dump output In-Reply-To: References: Message-ID: On Mon, Mar 9, 2020 at 3:47 PM Zane Charles Jakobs wrote: > Hi PETSc devs, > > I have a C++ program written with PETSc (code available at > https://github.com/DiffeoInvariant/NLTS/blob/master/examples/mutual_opt.cpp), > and if I run it with -malloc_debug and -malloc_dump enabled, PETSc says no > memory is left unfreed at exit. 
However, running with valgrind > --leak-check=full --show-leak-kinds=all gives me this leak summary: > > LEAK SUMMARY: > ==15364== definitely lost: 0 bytes in 0 blocks > ==15364== indirectly lost: 0 bytes in 0 blocks > ==15364== possibly lost: 0 bytes in 0 blocks > ==15364== still reachable: 610,904 bytes in 8,246 blocks > ==15364== suppressed: 0 bytes in 0 blocks > > The information from --show-leak-kinds=all gives several blocks of the > form (this is the last block printed; I've attached the full output from > valgrind to this email) > > ==15364== 329,080 bytes in 3 blocks are still reachable in loss record 7 > of 7 > ==15364== at 0x483CFAF: realloc (in > /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) > ==15364== by 0x7BDE1A7: ??? (in > /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) > ==15364== by 0x7BDE41C: ??? (in > /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) > ==15364== by 0x7BDE579: pci_device_get_device_name (in > /usr/lib/x86_64-linux-gnu/libpciaccess.so.0.11.1) > ==15364== by 0x759CFC4: hwloc_look_pci (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x7571D0B: hwloc_discover (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x7571637: hwloc_topology_load (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > hwloc is definitely mallocing memory and not freeing it (not PETSc), but its really small. Thanks, Matt ==15364== by 0x736780A: MPIR_Init_thread (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x7367435: PMPI_Init_thread (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libmpi.so.12.1.8) > ==15364== by 0x491CABA: PetscInitialize (in > /usr/local/petsc/arch-linux-cxx-debug/lib/libpetsc.so.3.012.4) > ==15364== by 0x724AB0E: nlts::io::Initialize(int*, char***, char > const*, char const*) (io.cpp:126) > ==15364== by 0x4023CB: main (mutual_opt.cpp:30) > > The function nlts::io::Initialize that's referenced at the bottom is simply > > PetscErrorCode nlts::io::Initialize(int *argc, char ***argv, > const char file[], > const char help[]) > { > PetscFunctionBeginUser; > auto ierr = PetscInitialize(argc, argv, file, help);CHKERRQ(ierr); > PetscFunctionReturn(ierr); > } > > Is valgrind actually detecting unfreed memory here, or is this something I > don't need to worry about? > > Thanks! > > -Zane Jakobs > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jczhang at mcs.anl.gov Mon Mar 9 15:27:57 2020 From: jczhang at mcs.anl.gov (Junchao Zhang) Date: Mon, 9 Mar 2020 15:27:57 -0500 Subject: [petsc-users] SuperLU_Dist bug or "intentional error" In-Reply-To: References: Message-ID: I checked and I could ran your test correctly with petsc master. --Junchao Zhang On Mon, Mar 9, 2020 at 2:51 PM Junchao Zhang wrote: > Let me try it. BTW, did you find the same code at > https://gitlab.com/petsc/petsc/-/blob/master/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > --Junchao Zhang > > > On Mon, Mar 9, 2020 at 2:46 PM Rochan Upadhyay wrote: > >> Hi Junchao, >> I doubt if it was fixed as diff of the >> src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c between master-branch and >> version 12.4 shows no changes. 
>> I am unable to compile the master version (configure.log attached) but I >> think you can recreate the problem by running the ex12.c program that >> I attached on my previous mail. >> Regards, >> Rochan >> >> On Mon, Mar 9, 2020 at 2:22 PM Junchao Zhang wrote: >> >>> Could you try the master branch since it seems Stefano fixed this >>> problem recently? >>> --Junchao Zhang >>> >>> >>> On Mon, Mar 9, 2020 at 2:04 PM Rochan Upadhyay >>> wrote: >>> >>>> Dear PETSc Developers, >>>> >>>> I am having trouble interfacing SuperLU_Dist as a direct solver for >>>> certain problems in PETSc. The problem is that when interfacing with >>>> SuperLU_Dist, you need your matrix to be of Type MPISEQAIJ when running MPI >>>> with one processor. PETSc has long allowed the use of Matrix type MPIAIJ >>>> for all MPI runs, including MPI with a single processor and that is still >>>> the case for all of PETSc's native solvers. This however has been broken >>>> for the SuperLU_Dist option. The following code snippet (in PETSc and not >>>> SuperLU_Dist) is responsible for this restriction and I do not know if it >>>> is by design or accident : >>>> >>>> In file petsc-3.12.4/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c >>>> line 257 onwards : >>>> >>>> ierr = >>>> MPI_Comm_size(PetscObjectComm((PetscObject)A),&size);CHKERRQ(ierr); >>>> >>>> if (size == 1) { >>>> aa = (Mat_SeqAIJ*)A->data; >>>> rstart = 0; >>>> nz = aa->nz; >>>> } else { >>>> Mat_MPIAIJ *mat = (Mat_MPIAIJ*)A->data; >>>> aa = (Mat_SeqAIJ*)(mat->A)->data; >>>> bb = (Mat_SeqAIJ*)(mat->B)->data; >>>> ai = aa->i; aj = aa->j; >>>> bi = bb->i; bj = bb->j; >>>> >>>> The code seems to check for number of processors and if it is = 1 >>>> conclude that the matrix is a Mat_SeqAIJ and perform some operations. Only >>>> if number-of-procs > 1 then it assumes that matrix is of type Mat_MPIAIJ. I >>>> think this is unwieldy and lacks generality. One would like the same piece >>>> of code to run in MPI mode for all processors with type Mat_MPIAIJ. Also >>>> this restriction has suddenly appeared in a recent version. The issue was >>>> not there until at least 3.9.4. So my question is from now (e.g. v12.4) on, >>>> should we always use matrix type Mat_SeqAIJ when running on 1 processor >>>> even with MPI enabled and use Mat_MPIAIJ for more than 1 processor. That is >>>> use the number of processors in use as a criterion to set the matrix type ? >>>> >>>> A an illustration, I have attached a minor modification of KSP example >>>> 12, that used to work with all PETSc versions until at least 3.9.4 but now >>>> throws a segmentation fault. It was compiled with MPI and run with mpiexec >>>> -n 1 ./ex12 >>>> If I remove the "ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);" it is >>>> okay. >>>> >>>> I hope you can clarify my confusion. >>>> >>>> Regards, >>>> Rochan >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
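The change being discussed amounts to dispatching on the matrix type rather than on the communicator size. A rough sketch of that idea in terms of public PETSc calls follows; it is an illustration of the approach only, not the code of the actual fix in the master branch.

#include <petscmat.h>

/* Sketch: pick the access path from the matrix type instead of from the
   communicator size, so a MATMPIAIJ matrix living on a single rank still
   goes through the MPIAIJ branch.  Illustrative only. */
static PetscErrorCode AccessAIJParts(Mat A)
{
  PetscBool      ismpiaij;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscObjectBaseTypeCompare((PetscObject)A, MATMPIAIJ, &ismpiaij);CHKERRQ(ierr);
  if (ismpiaij) {
    Mat            Ad, Ao;        /* diagonal and off-diagonal blocks */
    const PetscInt *colmap;
    ierr = MatMPIAIJGetSeqAIJ(A, &Ad, &Ao, &colmap);CHKERRQ(ierr);
    /* work with Ad/Ao here instead of casting A->data to Mat_MPIAIJ */
  } else {
    /* A is a sequential AIJ matrix; use it directly */
  }
  PetscFunctionReturn(0);
}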
URL: From stefano.zampini at gmail.com Mon Mar 9 15:52:17 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Mon, 9 Mar 2020 23:52:17 +0300 Subject: [petsc-users] SuperLU_Dist bug or "intentional error" In-Reply-To: References: Message-ID: <389CBF1D-6E94-4C14-839D-9EEBD21A9821@gmail.com> Rochan This has been fixed few months ago https://gitlab.com/petsc/petsc/-/commit/c8f76b2f0ecb94de1c6dd38e490dd0a500501954 Here is the relevant code you were mentioning in the new version https://gitlab.com/petsc/petsc/-/blob/master/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c#L274 > On Mar 9, 2020, at 10:51 PM, Junchao Zhang via petsc-users wrote: > > Let me try it. BTW, did you find the same code at https://gitlab.com/petsc/petsc/-/blob/master/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c > --Junchao Zhang > > > On Mon, Mar 9, 2020 at 2:46 PM Rochan Upadhyay > wrote: > Hi Junchao, > I doubt if it was fixed as diff of the src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c between master-branch and version 12.4 shows no changes. > I am unable to compile the master version (configure.log attached) but I think you can recreate the problem by running the ex12.c program that > I attached on my previous mail. > Regards, > Rochan > > On Mon, Mar 9, 2020 at 2:22 PM Junchao Zhang > wrote: > Could you try the master branch since it seems Stefano fixed this problem recently? > --Junchao Zhang > > > On Mon, Mar 9, 2020 at 2:04 PM Rochan Upadhyay > wrote: > Dear PETSc Developers, > > I am having trouble interfacing SuperLU_Dist as a direct solver for certain problems in PETSc. The problem is that when interfacing with SuperLU_Dist, you need your matrix to be of Type MPISEQAIJ when running MPI with one processor. PETSc has long allowed the use of Matrix type MPIAIJ for all MPI runs, including MPI with a single processor and that is still the case for all of PETSc's native solvers. This however has been broken for the SuperLU_Dist option. The following code snippet (in PETSc and not SuperLU_Dist) is responsible for this restriction and I do not know if it is by design or accident : > > In file petsc-3.12.4/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c line 257 onwards : > > ierr = MPI_Comm_size(PetscObjectComm((PetscObject)A),&size);CHKERRQ(ierr); > > if (size == 1) { > aa = (Mat_SeqAIJ*)A->data; > rstart = 0; > nz = aa->nz; > } else { > Mat_MPIAIJ *mat = (Mat_MPIAIJ*)A->data; > aa = (Mat_SeqAIJ*)(mat->A)->data; > bb = (Mat_SeqAIJ*)(mat->B)->data; > ai = aa->i; aj = aa->j; > bi = bb->i; bj = bb->j; > > The code seems to check for number of processors and if it is = 1 conclude that the matrix is a Mat_SeqAIJ and perform some operations. Only if number-of-procs > 1 then it assumes that matrix is of type Mat_MPIAIJ. I think this is unwieldy and lacks generality. One would like the same piece of code to run in MPI mode for all processors with type Mat_MPIAIJ. Also this restriction has suddenly appeared in a recent version. The issue was not there until at least 3.9.4. So my question is from now (e.g. v12.4) on, should we always use matrix type Mat_SeqAIJ when running on 1 processor even with MPI enabled and use Mat_MPIAIJ for more than 1 processor. That is use the number of processors in use as a criterion to set the matrix type ? > > A an illustration, I have attached a minor modification of KSP example 12, that used to work with all PETSc versions until at least 3.9.4 but now throws a segmentation fault. 
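The attached reproducer itself is not in the archive, but the setup being described is roughly the following. This is a reconstruction for illustration using 3.12-era names; the matrix size n and the assembly are placeholders and the modified ex12 is not reproduced here.

#include <petscksp.h>

/* Reconstruction of the reported setup: force MATMPIAIJ even on one rank and
   select SuperLU_DIST for the LU factorization.  Illustrative only. */
static PetscErrorCode SetupLikeModifiedEx12(PetscInt n, Mat *A, KSP *ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreate(PETSC_COMM_WORLD, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);  /* the line whose removal avoids the crash */
  ierr = MatSetUp(*A);CHKERRQ(ierr);
  /* ... assemble A as in ex12 ... */
  ierr = KSPCreate(PETSC_COMM_WORLD, ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(*ksp, *A, *A);CHKERRQ(ierr);
  ierr = KSPGetPC(*ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERSUPERLU_DIST);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}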
It was compiled with MPI and run with mpiexec -n 1 ./ex12 > If I remove the "ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);" it is okay. > > I hope you can clarify my confusion. > > Regards, > Rochan > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Mar 10 06:31:30 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 10 Mar 2020 07:31:30 -0400 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: Message-ID: Hi Cameron, This can go on the list and we always want the configure.log file. I build on Summit, but have not used the XL compilers. I've built 3.7.7 with GNU and PGI. (XGC usually wants PGI) On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith wrote: > Hello, > > I'm installing petsc 3.7.7 on a summit like system with the following > spack spec: > > petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > > with the XL 16.1.1 compiler and Spectrum MPI 10.3 . This install > produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` file > that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and > 'PETSC_WITH_EXTERNAL_LIB' variables. > > The application I'm building, XGC, has a makefile based build system > that includes '/path/to/petsc/install/lib/petsc/conf/variables' which in > turn includes '/lib/petsc/conf/petscvariables'. > > From what I can tell, xlomp_ser is a serial implementation of the > openmp library. When XGC links this library it satisfies the openmp > symbols XGC wants and at run time results in openmp API calls like > 'omp_get_max_threads()' returning 1 regardless of the OMP_NUM_THREADS > setting. > > Do you know how I can build petsc, with or without spack, and avoid this > library being listed in 'lib/petsc/conf/petscvariables'? > > If this should go to a petsc mailing list or git repo issues page I can > send it there. > > Thank-you, > Cameron > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smithc11 at rpi.edu Tue Mar 10 07:07:16 2020 From: smithc11 at rpi.edu (Cameron Smith) Date: Tue, 10 Mar 2020 08:07:16 -0400 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: Message-ID: Thank you Mark. The configure.log is attached. Please let me know if any other info is needed. -Cameron On 3/10/20 7:31 AM, Mark Adams wrote: > Hi?Cameron, > > This can go on the list and we always want the configure.log file. > > I build on Summit, but have not used the XL compilers. I've built 3.7.7 > with GNU and PGI. (XGC usually wants PGI) > > > > On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith > wrote: > > Hello, > > I'm installing petsc 3.7.7 on a summit like system with the following > spack spec: > > petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > > with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install > produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` file > that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and > 'PETSC_WITH_EXTERNAL_LIB' variables. > > The application I'm building, XGC, has a makefile based build system > that includes '/path/to/petsc/install/lib/petsc/conf/variables' > which in > turn includes '/lib/petsc/conf/petscvariables'. > > ?From what I can tell, xlomp_ser is a serial implementation of the > openmp library.? When XGC links this library it satisfies the openmp > symbols XGC wants and at run time results in openmp API calls like > 'omp_get_max_threads()' returning 1 regardless of the OMP_NUM_THREADS > setting. 
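A quick way to confirm which OpenMP runtime actually ends up linked is a check program along these lines, built with the same compiler and link flags as the application. It is illustrative and not part of the original report.

/* Tiny check: if the serial xlomp_ser library was pulled in at link time,
   omp_get_max_threads() stays at 1 no matter what OMP_NUM_THREADS is set to. */
#include <stdio.h>
#include <omp.h>

int main(void)
{
  printf("omp_get_max_threads() = %d\n", omp_get_max_threads());
  return 0;
}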
> > Do you know how I can build petsc, with or without spack, and avoid > this > library being listed in 'lib/petsc/conf/petscvariables'? > > If this should go to a petsc mailing list or git repo issues page I can > send it there. > > Thank-you, > Cameron > -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 1992943 bytes Desc: not available URL: From balay at mcs.anl.gov Tue Mar 10 07:55:16 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2020 07:55:16 -0500 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: Message-ID: Cameron, You can try changing following petsc configure options and see if that works.. [i.e build petsc manually] CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp Satish On Tue, 10 Mar 2020, Cameron Smith wrote: > Thank you Mark. > > The configure.log is attached. > > Please let me know if any other info is needed. > > -Cameron > > On 3/10/20 7:31 AM, Mark Adams wrote: > > Hi?Cameron, > > > > This can go on the list and we always want the configure.log file. > > > > I build on Summit, but have not used the XL compilers. I've built 3.7.7 with > > GNU and PGI. (XGC usually wants PGI) > > > > > > > > On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith > > wrote: > > > > Hello, > > > > I'm installing petsc 3.7.7 on a summit like system with the following > > spack spec: > > > > petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > > > > with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install > > produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` file > > that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and > > 'PETSC_WITH_EXTERNAL_LIB' variables. > > > > The application I'm building, XGC, has a makefile based build system > > that includes '/path/to/petsc/install/lib/petsc/conf/variables' > > which in > > turn includes '/lib/petsc/conf/petscvariables'. > > > > ?From what I can tell, xlomp_ser is a serial implementation of the > > openmp library.? When XGC links this library it satisfies the openmp > > symbols XGC wants and at run time results in openmp API calls like > > 'omp_get_max_threads()' returning 1 regardless of the OMP_NUM_THREADS > > setting. > > > > Do you know how I can build petsc, with or without spack, and avoid > > this > > library being listed in 'lib/petsc/conf/petscvariables'? > > > > If this should go to a petsc mailing list or git repo issues page I can > > send it there. > > > > Thank-you, > > Cameron > > > > From balay at mcs.anl.gov Tue Mar 10 08:01:58 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2020 08:01:58 -0500 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: Message-ID: BTW: You might be able to do the same via spack. spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp fflags=-fopenmp cxxflags=-fopenmp Satish On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > Cameron, > > You can try changing following petsc configure options and see if that works.. [i.e build petsc manually] > > CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp > > Satish > > On Tue, 10 Mar 2020, Cameron Smith wrote: > > > Thank you Mark. > > > > The configure.log is attached. > > > > Please let me know if any other info is needed. > > > > -Cameron > > > > On 3/10/20 7:31 AM, Mark Adams wrote: > > > Hi?Cameron, > > > > > > This can go on the list and we always want the configure.log file. 
> > > > > > I build on Summit, but have not used the XL compilers. I've built 3.7.7 with > > > GNU and PGI. (XGC usually wants PGI) > > > > > > > > > > > > On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith > > > wrote: > > > > > > Hello, > > > > > > I'm installing petsc 3.7.7 on a summit like system with the following > > > spack spec: > > > > > > petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > > > > > > with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install > > > produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` file > > > that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and > > > 'PETSC_WITH_EXTERNAL_LIB' variables. > > > > > > The application I'm building, XGC, has a makefile based build system > > > that includes '/path/to/petsc/install/lib/petsc/conf/variables' > > > which in > > > turn includes '/lib/petsc/conf/petscvariables'. > > > > > > ?From what I can tell, xlomp_ser is a serial implementation of the > > > openmp library.? When XGC links this library it satisfies the openmp > > > symbols XGC wants and at run time results in openmp API calls like > > > 'omp_get_max_threads()' returning 1 regardless of the OMP_NUM_THREADS > > > setting. > > > > > > Do you know how I can build petsc, with or without spack, and avoid > > > this > > > library being listed in 'lib/petsc/conf/petscvariables'? > > > > > > If this should go to a petsc mailing list or git repo issues page I can > > > send it there. > > > > > > Thank-you, > > > Cameron > > > > > > > > From smithc11 at rpi.edu Tue Mar 10 08:09:15 2020 From: smithc11 at rpi.edu (Cameron Smith) Date: Tue, 10 Mar 2020 09:09:15 -0400 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: Message-ID: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> Thank you. I'll give that a shot. Out of curiosity, how does passing the openmp flags relate to the '--with-openmp' option described here: https://www.mcs.anl.gov/petsc/documentation/installation.html under 'Installing packages that utilize OpenMP'? Is this just passing the openmp flags into compile/link commands of the packages that petsc builds (via, --download- options) and not to the petsc compile/link? -Cameron On 3/10/20 9:01 AM, Satish Balay wrote: > BTW: You might be able to do the same via spack. > > spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp fflags=-fopenmp cxxflags=-fopenmp > > Satish > > On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > >> Cameron, >> >> You can try changing following petsc configure options and see if that works.. [i.e build petsc manually] >> >> CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp >> >> Satish >> >> On Tue, 10 Mar 2020, Cameron Smith wrote: >> >>> Thank you Mark. >>> >>> The configure.log is attached. >>> >>> Please let me know if any other info is needed. >>> >>> -Cameron >>> >>> On 3/10/20 7:31 AM, Mark Adams wrote: >>>> Hi?Cameron, >>>> >>>> This can go on the list and we always want the configure.log file. >>>> >>>> I build on Summit, but have not used the XL compilers. I've built 3.7.7 with >>>> GNU and PGI. (XGC usually wants PGI) >>>> >>>> >>>> >>>> On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith >>> > wrote: >>>> >>>> Hello, >>>> >>>> I'm installing petsc 3.7.7 on a summit like system with the following >>>> spack spec: >>>> >>>> petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist >>>> >>>> with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? 
This install >>>> produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` file >>>> that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and >>>> 'PETSC_WITH_EXTERNAL_LIB' variables. >>>> >>>> The application I'm building, XGC, has a makefile based build system >>>> that includes '/path/to/petsc/install/lib/petsc/conf/variables' >>>> which in >>>> turn includes '/lib/petsc/conf/petscvariables'. >>>> >>>> ?From what I can tell, xlomp_ser is a serial implementation of the >>>> openmp library.? When XGC links this library it satisfies the openmp >>>> symbols XGC wants and at run time results in openmp API calls like >>>> 'omp_get_max_threads()' returning 1 regardless of the OMP_NUM_THREADS >>>> setting. >>>> >>>> Do you know how I can build petsc, with or without spack, and avoid >>>> this >>>> library being listed in 'lib/petsc/conf/petscvariables'? >>>> >>>> If this should go to a petsc mailing list or git repo issues page I can >>>> send it there. >>>> >>>> Thank-you, >>>> Cameron >>>> >>> >>> >> From balay at mcs.anl.gov Tue Mar 10 08:25:58 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2020 08:25:58 -0500 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> References: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> Message-ID: PETSc configure attempts to check for compiler libraries [so that one can mix and use c,c++,fortran codes with a c linker] by running compilers in verbose mode - and parsing the output. [i.e checkCLibraries(), checkFortranLibraries() ..] Here IBM compiler is using different library internally based on 'xlc -V' vs 'xlc -fopenmp -V'. For most other compilers - its just an additional library [with corresponding include file] So -fopenmp needs to be set before this step [checkCLibraries()] in configure. PETSc configure treats -with-openmp as an additional package - and process this option after the above checkCLibraries() check. However CFLAGS etc get processed and set before the call to checkCLibraries(). My suggestion is a workaround to get -fopenmp option set before checkCLibraries() are called. Satish On Tue, 10 Mar 2020, Cameron Smith wrote: > Thank you. I'll give that a shot. > > Out of curiosity, how does passing the openmp flags relate to the > '--with-openmp' option described here: > > https://www.mcs.anl.gov/petsc/documentation/installation.html > > under 'Installing packages that utilize OpenMP'? Is this just passing the > openmp flags into compile/link commands of the packages that petsc builds > (via, --download- options) and not to the petsc compile/link? > > -Cameron > > On 3/10/20 9:01 AM, Satish Balay wrote: > > BTW: You might be able to do the same via spack. > > > > spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp > > fflags=-fopenmp cxxflags=-fopenmp > > > > Satish > > > > On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > > > >> Cameron, > >> > >> You can try changing following petsc configure options and see if that > >> works.. [i.e build petsc manually] > >> > >> CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp > >> > >> Satish > >> > >> On Tue, 10 Mar 2020, Cameron Smith wrote: > >> > >>> Thank you Mark. > >>> > >>> The configure.log is attached. > >>> > >>> Please let me know if any other info is needed. > >>> > >>> -Cameron > >>> > >>> On 3/10/20 7:31 AM, Mark Adams wrote: > >>>> Hi?Cameron, > >>>> > >>>> This can go on the list and we always want the configure.log file. 
> >>>> > >>>> I build on Summit, but have not used the XL compilers. I've built 3.7.7 > >>>> with > >>>> GNU and PGI. (XGC usually wants PGI) > >>>> > >>>> > >>>> > >>>> On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith >>>> > wrote: > >>>> > >>>> Hello, > >>>> > >>>> I'm installing petsc 3.7.7 on a summit like system with the > >>>> following > >>>> spack spec: > >>>> > >>>> petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > >>>> > >>>> with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install > >>>> produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` > >>>> file > >>>> that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and > >>>> 'PETSC_WITH_EXTERNAL_LIB' variables. > >>>> > >>>> The application I'm building, XGC, has a makefile based build system > >>>> that includes '/path/to/petsc/install/lib/petsc/conf/variables' > >>>> which in > >>>> turn includes '/lib/petsc/conf/petscvariables'. > >>>> > >>>> ?From what I can tell, xlomp_ser is a serial implementation of the > >>>> openmp library.? When XGC links this library it satisfies the openmp > >>>> symbols XGC wants and at run time results in openmp API calls like > >>>> 'omp_get_max_threads()' returning 1 regardless of the > >>>> OMP_NUM_THREADS > >>>> setting. > >>>> > >>>> Do you know how I can build petsc, with or without spack, and avoid > >>>> this > >>>> library being listed in 'lib/petsc/conf/petscvariables'? > >>>> > >>>> If this should go to a petsc mailing list or git repo issues page I > >>>> can > >>>> send it there. > >>>> > >>>> Thank-you, > >>>> Cameron > >>>> > >>> > >>> > >> > > From smithc11 at rpi.edu Tue Mar 10 08:27:02 2020 From: smithc11 at rpi.edu (Cameron Smith) Date: Tue, 10 Mar 2020 09:27:02 -0400 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> Message-ID: <1dc42c3c-c92f-3c6b-0111-a4293550e4b7@rpi.edu> That makes sense. Thank you. -Cameron On 3/10/20 9:25 AM, Satish Balay wrote: > PETSc configure attempts to check for compiler libraries [so that one > can mix and use c,c++,fortran codes with a c linker] by running > compilers in verbose mode - and parsing the output. [i.e checkCLibraries(), checkFortranLibraries() ..] > > Here IBM compiler is using different library internally based on 'xlc -V' vs 'xlc -fopenmp -V'. For most other compilers - its just an additional library [with corresponding include file] > > So -fopenmp needs to be set before this step [checkCLibraries()] in configure. > > PETSc configure treats -with-openmp as an additional package - and process this option after the above checkCLibraries() check. > > However CFLAGS etc get processed and set before the call to checkCLibraries(). > > My suggestion is a workaround to get -fopenmp option set before checkCLibraries() are called. > > Satish > > On Tue, 10 Mar 2020, Cameron Smith wrote: > >> Thank you. I'll give that a shot. >> >> Out of curiosity, how does passing the openmp flags relate to the >> '--with-openmp' option described here: >> >> https://www.mcs.anl.gov/petsc/documentation/installation.html >> >> under 'Installing packages that utilize OpenMP'? Is this just passing the >> openmp flags into compile/link commands of the packages that petsc builds >> (via, --download- options) and not to the petsc compile/link? >> >> -Cameron >> >> On 3/10/20 9:01 AM, Satish Balay wrote: >>> BTW: You might be able to do the same via spack. 
>>> >>> spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp >>> fflags=-fopenmp cxxflags=-fopenmp >>> >>> Satish >>> >>> On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: >>> >>>> Cameron, >>>> >>>> You can try changing following petsc configure options and see if that >>>> works.. [i.e build petsc manually] >>>> >>>> CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp >>>> >>>> Satish >>>> >>>> On Tue, 10 Mar 2020, Cameron Smith wrote: >>>> >>>>> Thank you Mark. >>>>> >>>>> The configure.log is attached. >>>>> >>>>> Please let me know if any other info is needed. >>>>> >>>>> -Cameron >>>>> >>>>> On 3/10/20 7:31 AM, Mark Adams wrote: >>>>>> Hi?Cameron, >>>>>> >>>>>> This can go on the list and we always want the configure.log file. >>>>>> >>>>>> I build on Summit, but have not used the XL compilers. I've built 3.7.7 >>>>>> with >>>>>> GNU and PGI. (XGC usually wants PGI) >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith >>>>> > wrote: >>>>>> >>>>>> Hello, >>>>>> >>>>>> I'm installing petsc 3.7.7 on a summit like system with the >>>>>> following >>>>>> spack spec: >>>>>> >>>>>> petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist >>>>>> >>>>>> with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install >>>>>> produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` >>>>>> file >>>>>> that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and >>>>>> 'PETSC_WITH_EXTERNAL_LIB' variables. >>>>>> >>>>>> The application I'm building, XGC, has a makefile based build system >>>>>> that includes '/path/to/petsc/install/lib/petsc/conf/variables' >>>>>> which in >>>>>> turn includes '/lib/petsc/conf/petscvariables'. >>>>>> >>>>>> ?From what I can tell, xlomp_ser is a serial implementation of the >>>>>> openmp library.? When XGC links this library it satisfies the openmp >>>>>> symbols XGC wants and at run time results in openmp API calls like >>>>>> 'omp_get_max_threads()' returning 1 regardless of the >>>>>> OMP_NUM_THREADS >>>>>> setting. >>>>>> >>>>>> Do you know how I can build petsc, with or without spack, and avoid >>>>>> this >>>>>> library being listed in 'lib/petsc/conf/petscvariables'? >>>>>> >>>>>> If this should go to a petsc mailing list or git repo issues page I >>>>>> can >>>>>> send it there. >>>>>> >>>>>> Thank-you, >>>>>> Cameron >>>>>> >>>>> >>>>> >>>> >> >> From balay at mcs.anl.gov Tue Mar 10 08:43:39 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2020 08:43:39 -0500 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: <1dc42c3c-c92f-3c6b-0111-a4293550e4b7@rpi.edu> References: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> <1dc42c3c-c92f-3c6b-0111-a4293550e4b7@rpi.edu> Message-ID: Another option is to strip out -lxlomp_ser when parsing 'xlc -V' [as its a common library for all 3 language compilers]. i.e the following change [this change is on current maint - but similar change can be done with petsc-3.7].. 
[This is easier than changing the order in which --with-openmp option is processed] Satish ----------- diff --git a/config/BuildSystem/config/compilers.py b/config/BuildSystem/config/compilers.py index b4cbc183c0..cfffb1a80e 100644 --- a/config/BuildSystem/config/compilers.py +++ b/config/BuildSystem/config/compilers.py @@ -308,7 +308,7 @@ class Configure(config.base.Configure): self.logPrint('already in lflags: '+arg, 4, 'compilers') continue # Check for system libraries - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) if m: self.logPrint('Skipping system library: '+arg, 4, 'compilers') continue @@ -687,7 +687,7 @@ class Configure(config.base.Configure): self.logPrint('already in lflags: '+arg, 4, 'compilers') continue # Check for system libraries - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) if m: self.logPrint('Skipping system library: '+arg, 4, 'compilers') continue @@ -1085,7 +1085,7 @@ Otherwise you need a different combination of C, C++, and Fortran compilers") self.logPrint('Already in lflags so skipping: '+arg, 4, 'compilers') continue # Check for system libraries - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) if m: self.logPrint('Found system library therefore skipping: '+arg, 4, 'compilers') continue On Tue, 10 Mar 2020, Cameron Smith wrote: > That makes sense. Thank you. > > -Cameron > > On 3/10/20 9:25 AM, Satish Balay wrote: > > PETSc configure attempts to check for compiler libraries [so that one > > can mix and use c,c++,fortran codes with a c linker] by running > > compilers in verbose mode - and parsing the output. [i.e checkCLibraries(), > > checkFortranLibraries() ..] > > > > Here IBM compiler is using different library internally based on 'xlc -V' vs > > 'xlc -fopenmp -V'. For most other compilers - its just an additional library > > [with corresponding include file] > > > > So -fopenmp needs to be set before this step [checkCLibraries()] in > > configure. > > > > PETSc configure treats -with-openmp as an additional package - and process > > this option after the above checkCLibraries() check. > > > > However CFLAGS etc get processed and set before the call to > > checkCLibraries(). > > > > My suggestion is a workaround to get -fopenmp option set before > > checkCLibraries() are called. > > > > Satish > > > > On Tue, 10 Mar 2020, Cameron Smith wrote: > > > >> Thank you. I'll give that a shot. > >> > >> Out of curiosity, how does passing the openmp flags relate to the > >> '--with-openmp' option described here: > >> > >> https://www.mcs.anl.gov/petsc/documentation/installation.html > >> > >> under 'Installing packages that utilize OpenMP'? Is this just passing the > >> openmp flags into compile/link commands of the packages that petsc builds > >> (via, --download- options) and not to the petsc compile/link? > >> > >> -Cameron > >> > >> On 3/10/20 9:01 AM, Satish Balay wrote: > >>> BTW: You might be able to do the same via spack. 
> >>> > >>> spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp > >>> fflags=-fopenmp cxxflags=-fopenmp > >>> > >>> Satish > >>> > >>> On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > >>> > >>>> Cameron, > >>>> > >>>> You can try changing following petsc configure options and see if that > >>>> works.. [i.e build petsc manually] > >>>> > >>>> CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp > >>>> > >>>> Satish > >>>> > >>>> On Tue, 10 Mar 2020, Cameron Smith wrote: > >>>> > >>>>> Thank you Mark. > >>>>> > >>>>> The configure.log is attached. > >>>>> > >>>>> Please let me know if any other info is needed. > >>>>> > >>>>> -Cameron > >>>>> > >>>>> On 3/10/20 7:31 AM, Mark Adams wrote: > >>>>>> Hi?Cameron, > >>>>>> > >>>>>> This can go on the list and we always want the configure.log file. > >>>>>> > >>>>>> I build on Summit, but have not used the XL compilers. I've built 3.7.7 > >>>>>> with > >>>>>> GNU and PGI. (XGC usually wants PGI) > >>>>>> > >>>>>> > >>>>>> > >>>>>> On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith >>>>>> > wrote: > >>>>>> > >>>>>> Hello, > >>>>>> > >>>>>> I'm installing petsc 3.7.7 on a summit like system with the > >>>>>> following > >>>>>> spack spec: > >>>>>> > >>>>>> petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > >>>>>> > >>>>>> with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install > >>>>>> produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` > >>>>>> file > >>>>>> that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and > >>>>>> 'PETSC_WITH_EXTERNAL_LIB' variables. > >>>>>> > >>>>>> The application I'm building, XGC, has a makefile based build > >>>>>> system > >>>>>> that includes '/path/to/petsc/install/lib/petsc/conf/variables' > >>>>>> which in > >>>>>> turn includes '/lib/petsc/conf/petscvariables'. > >>>>>> > >>>>>> ?From what I can tell, xlomp_ser is a serial implementation of > >>>>>> the > >>>>>> openmp library.? When XGC links this library it satisfies the > >>>>>> openmp > >>>>>> symbols XGC wants and at run time results in openmp API calls > >>>>>> like > >>>>>> 'omp_get_max_threads()' returning 1 regardless of the > >>>>>> OMP_NUM_THREADS > >>>>>> setting. > >>>>>> > >>>>>> Do you know how I can build petsc, with or without spack, and > >>>>>> avoid > >>>>>> this > >>>>>> library being listed in 'lib/petsc/conf/petscvariables'? > >>>>>> > >>>>>> If this should go to a petsc mailing list or git repo issues page > >>>>>> I > >>>>>> can > >>>>>> send it there. > >>>>>> > >>>>>> Thank-you, > >>>>>> Cameron > >>>>>> > >>>>> > >>>>> > >>>> > >> > >> > From balay at mcs.anl.gov Tue Mar 10 08:48:43 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2020 08:48:43 -0500 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> <1dc42c3c-c92f-3c6b-0111-a4293550e4b7@rpi.edu> Message-ID: Created an MR for this change https://gitlab.com/petsc/petsc/-/merge_requests/2593 Satish On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > Another option is to strip out -lxlomp_ser when parsing 'xlc -V' [as its a common library for all 3 language compilers]. i.e the following change [this change is on current maint - but similar change can be done with petsc-3.7].. 
> > [This is easier than changing the order in which --with-openmp option is processed] > > Satish > ----------- > > diff --git a/config/BuildSystem/config/compilers.py b/config/BuildSystem/config/compilers.py > index b4cbc183c0..cfffb1a80e 100644 > --- a/config/BuildSystem/config/compilers.py > +++ b/config/BuildSystem/config/compilers.py > @@ -308,7 +308,7 @@ class Configure(config.base.Configure): > self.logPrint('already in lflags: '+arg, 4, 'compilers') > continue > # Check for system libraries > - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) > + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) > if m: > self.logPrint('Skipping system library: '+arg, 4, 'compilers') > continue > @@ -687,7 +687,7 @@ class Configure(config.base.Configure): > self.logPrint('already in lflags: '+arg, 4, 'compilers') > continue > # Check for system libraries > - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) > + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) > if m: > self.logPrint('Skipping system library: '+arg, 4, 'compilers') > continue > @@ -1085,7 +1085,7 @@ Otherwise you need a different combination of C, C++, and Fortran compilers") > self.logPrint('Already in lflags so skipping: '+arg, 4, 'compilers') > continue > # Check for system libraries > - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) > + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) > if m: > self.logPrint('Found system library therefore skipping: '+arg, 4, 'compilers') > continue > > On Tue, 10 Mar 2020, Cameron Smith wrote: > > > That makes sense. Thank you. > > > > -Cameron > > > > On 3/10/20 9:25 AM, Satish Balay wrote: > > > PETSc configure attempts to check for compiler libraries [so that one > > > can mix and use c,c++,fortran codes with a c linker] by running > > > compilers in verbose mode - and parsing the output. [i.e checkCLibraries(), > > > checkFortranLibraries() ..] > > > > > > Here IBM compiler is using different library internally based on 'xlc -V' vs > > > 'xlc -fopenmp -V'. For most other compilers - its just an additional library > > > [with corresponding include file] > > > > > > So -fopenmp needs to be set before this step [checkCLibraries()] in > > > configure. > > > > > > PETSc configure treats -with-openmp as an additional package - and process > > > this option after the above checkCLibraries() check. > > > > > > However CFLAGS etc get processed and set before the call to > > > checkCLibraries(). > > > > > > My suggestion is a workaround to get -fopenmp option set before > > > checkCLibraries() are called. > > > > > > Satish > > > > > > On Tue, 10 Mar 2020, Cameron Smith wrote: > > > > > >> Thank you. I'll give that a shot. > > >> > > >> Out of curiosity, how does passing the openmp flags relate to the > > >> '--with-openmp' option described here: > > >> > > >> https://www.mcs.anl.gov/petsc/documentation/installation.html > > >> > > >> under 'Installing packages that utilize OpenMP'? 
Is this just passing the > > >> openmp flags into compile/link commands of the packages that petsc builds > > >> (via, --download- options) and not to the petsc compile/link? > > >> > > >> -Cameron > > >> > > >> On 3/10/20 9:01 AM, Satish Balay wrote: > > >>> BTW: You might be able to do the same via spack. > > >>> > > >>> spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp > > >>> fflags=-fopenmp cxxflags=-fopenmp > > >>> > > >>> Satish > > >>> > > >>> On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > > >>> > > >>>> Cameron, > > >>>> > > >>>> You can try changing following petsc configure options and see if that > > >>>> works.. [i.e build petsc manually] > > >>>> > > >>>> CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp > > >>>> > > >>>> Satish > > >>>> > > >>>> On Tue, 10 Mar 2020, Cameron Smith wrote: > > >>>> > > >>>>> Thank you Mark. > > >>>>> > > >>>>> The configure.log is attached. > > >>>>> > > >>>>> Please let me know if any other info is needed. > > >>>>> > > >>>>> -Cameron > > >>>>> > > >>>>> On 3/10/20 7:31 AM, Mark Adams wrote: > > >>>>>> Hi?Cameron, > > >>>>>> > > >>>>>> This can go on the list and we always want the configure.log file. > > >>>>>> > > >>>>>> I build on Summit, but have not used the XL compilers. I've built 3.7.7 > > >>>>>> with > > >>>>>> GNU and PGI. (XGC usually wants PGI) > > >>>>>> > > >>>>>> > > >>>>>> > > >>>>>> On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith > >>>>>> > wrote: > > >>>>>> > > >>>>>> Hello, > > >>>>>> > > >>>>>> I'm installing petsc 3.7.7 on a summit like system with the > > >>>>>> following > > >>>>>> spack spec: > > >>>>>> > > >>>>>> petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > > >>>>>> > > >>>>>> with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install > > >>>>>> produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` > > >>>>>> file > > >>>>>> that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and > > >>>>>> 'PETSC_WITH_EXTERNAL_LIB' variables. > > >>>>>> > > >>>>>> The application I'm building, XGC, has a makefile based build > > >>>>>> system > > >>>>>> that includes '/path/to/petsc/install/lib/petsc/conf/variables' > > >>>>>> which in > > >>>>>> turn includes '/lib/petsc/conf/petscvariables'. > > >>>>>> > > >>>>>> ?From what I can tell, xlomp_ser is a serial implementation of > > >>>>>> the > > >>>>>> openmp library.? When XGC links this library it satisfies the > > >>>>>> openmp > > >>>>>> symbols XGC wants and at run time results in openmp API calls > > >>>>>> like > > >>>>>> 'omp_get_max_threads()' returning 1 regardless of the > > >>>>>> OMP_NUM_THREADS > > >>>>>> setting. > > >>>>>> > > >>>>>> Do you know how I can build petsc, with or without spack, and > > >>>>>> avoid > > >>>>>> this > > >>>>>> library being listed in 'lib/petsc/conf/petscvariables'? > > >>>>>> > > >>>>>> If this should go to a petsc mailing list or git repo issues page > > >>>>>> I > > >>>>>> can > > >>>>>> send it there. 
> > >>>>>> > > >>>>>> Thank-you, > > >>>>>> Cameron > > >>>>>> > > >>>>> > > >>>>> > > >>>> > > >> > > >> > > > From perceval.desforges at polytechnique.edu Tue Mar 10 10:31:49 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Tue, 10 Mar 2020 16:31:49 +0100 Subject: [petsc-users] Inertia computation fails In-Reply-To: References: <6B26CC19-497E-4F10-B7F7-82AA2EDBEDDB@dsic.upv.es> Message-ID: Hello again, I've tried following your recommendations, and yet the computation still fails with the same error message even when i take a very large interval (the failed value is at 1.91 and the interval goes from 1.5 to 3). Could anything else be causing this? Thanks again. Best regards, Perceval, > Sorry I hadn't seen that you had responded. Thanks a lot, I'll try that. > > Regards, > > Perceval, > Are you using the EPSKrylovSchurSetDetectZeros() option? As in this example https://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex25.c.html > If so, then the explanation is probably that one of the endpoints of your interval coincides with an eigenvalue. Try with a slightly different interval. > > Jose > > El 5 mar 2020, a las 14:51, Satish Balay via petsc-users escribi?: > > The address here should be petsc-users - not petsc-users-bounces > > balay at sb /home/balay/git-repo/slepc (master=) > $ git grep 'Inertia computation fails' > src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",sr->int1); > src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); > src/pep/impls/krylov/stoar/qslice.c: if (!nconv) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",nzshift); > src/pep/impls/krylov/stoar/qslice.c: if (zeros) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); > > So the message is likely coming from slepc > > Satish > > ---------- Forwarded message ---------- > Date: Thu, 05 Mar 2020 14:43:13 +0100 > From: Perceval Desforges > To: petsc-users > Subject: Inertia computation fails > > Dear PETSc developpers, I am using SLEPC and MUMPS to calculate the eigenvalues of a real > symmetric matrix in an interval. I have come upon a crash and I was > unable to find any documentation on the error I got. The error is: Inertia computation fails in 2.19893 Is this a slepc, a Petsc or a mumps problem? Thanks, Perceval -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Tue Mar 10 10:49:21 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 10 Mar 2020 16:49:21 +0100 Subject: [petsc-users] Inertia computation fails In-Reply-To: References: <6B26CC19-497E-4F10-B7F7-82AA2EDBEDDB@dsic.upv.es> Message-ID: <70E9BA19-DDD4-4C80-8AA0-F8C906116F4F@dsic.upv.es> I need to know the exact code you are using to configure the solver and the command line you use to run the code. Jose > El 10 mar 2020, a las 16:31, Perceval Desforges escribi?: > > Hello again, > > I've tried following your recommendations, and yet the computation still fails with the same error message even when i take a very large interval (the failed value is at 1.91 and the interval goes from 1.5 to 3). > > Could anything else be causing this? > > Thanks again. > > Best regards, > > Perceval, > > > >> Sorry I hadn't seen that you had responded. Thanks a lot, I'll try that. 
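For reference, the zero-detection setup suggested earlier looks roughly like the following, modelled on the ex25 example linked above. A and B stand for already assembled problem matrices, the interval matches the one described, and the MUMPS option string follows what that example uses; all of this is a sketch under those assumptions, not the code actually being run here.

/* Spectrum slicing in [1.5, 3.0] with zero detection and MUMPS inertia. */
EPS            eps;
ST             st;
KSP            ksp;
PC             pc;
PetscErrorCode ierr;

ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
ierr = EPSSetOperators(eps, A, B);CHKERRQ(ierr);           /* B = NULL for a standard problem */
ierr = EPSSetProblemType(eps, EPS_GHEP);CHKERRQ(ierr);
ierr = EPSSetType(eps, EPSKRYLOVSCHUR);CHKERRQ(ierr);
ierr = EPSSetWhichEigenpairs(eps, EPS_ALL);CHKERRQ(ierr);   /* all eigenvalues in the interval */
ierr = EPSSetInterval(eps, 1.5, 3.0);CHKERRQ(ierr);
ierr = EPSKrylovSchurSetDetectZeros(eps, PETSC_TRUE);CHKERRQ(ierr);
ierr = EPSGetST(eps, &st);CHKERRQ(ierr);
ierr = STSetType(st, STSINVERT);CHKERRQ(ierr);
ierr = STGetKSP(st, &ksp);CHKERRQ(ierr);
ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCCHOLESKY);CHKERRQ(ierr);
ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
/* MUMPS settings for reliable inertia / null-pivot detection, as in ex25 */
ierr = PetscOptionsInsertString(NULL, "-st_mat_mumps_icntl_13 1 -st_mat_mumps_icntl_24 1 -st_mat_mumps_cntl_3 1e-12");CHKERRQ(ierr);
ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
ierr = EPSSolve(eps);CHKERRQ(ierr);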
>> >> Regards, >> >> Perceval, >> >> Are you using the EPSKrylovSchurSetDetectZeros() option? As in this example https://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex25.c.html >> If so, then the explanation is probably that one of the endpoints of your interval coincides with an eigenvalue. Try with a slightly different interval. >> >> Jose >> >> >> El 5 mar 2020, a las 14:51, Satish Balay via petsc-users escribi?: >> >> The address here should be petsc-users - not petsc-users-bounces >> >> balay at sb /home/balay/git-repo/slepc (master=) >> $ git grep 'Inertia computation fails' >> src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",sr->int1); >> src/eps/impls/krylov/krylovschur/ks-slice.c: if (zeros) SETERRQ1(((PetscObject)eps)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); >> src/pep/impls/krylov/stoar/qslice.c: if (!nconv) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",nzshift); >> src/pep/impls/krylov/stoar/qslice.c: if (zeros) SETERRQ1(((PetscObject)pep)->comm,PETSC_ERR_CONV_FAILED,"Inertia computation fails in %g",newShift); >> >> So the message is likely coming from slepc >> >> Satish >> >> ---------- Forwarded message ---------- >> Date: Thu, 05 Mar 2020 14:43:13 +0100 >> From: Perceval Desforges >> To: petsc-users >> Subject: Inertia computation fails >> >> Dear PETSc developpers, I am using SLEPC and MUMPS to calculate the eigenvalues of a real >> symmetric matrix in an interval. I have come upon a crash and I was >> unable to find any documentation on the error I got. The error is: Inertia computation fails in 2.19893 Is this a slepc, a Petsc or a mumps problem? Thanks, Perceval >> >> > > From smithc11 at rpi.edu Tue Mar 10 12:21:07 2020 From: smithc11 at rpi.edu (Cameron Smith) Date: Tue, 10 Mar 2020 13:21:07 -0400 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: References: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> <1dc42c3c-c92f-3c6b-0111-a4293550e4b7@rpi.edu> Message-ID: <526e4358-00f7-e4b3-889f-c0aa2c4fe910@rpi.edu> Thank you. Using the spack spec petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-qsmp=omp fflags=-qsmp=omp cxxflags=-qsmp=omp and passing the same flags to a manual (non-spack) petsc 3.7.7 build produced installs that do not list xlomp_ser in the configure.log. Thank-you, Cameron On 3/10/20 9:48 AM, Satish Balay wrote: > Created an MR for this change > > https://gitlab.com/petsc/petsc/-/merge_requests/2593 > > Satish > > On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > >> Another option is to strip out -lxlomp_ser when parsing 'xlc -V' [as its a common library for all 3 language compilers]. i.e the following change [this change is on current maint - but similar change can be done with petsc-3.7].. 
>> >> [This is easier than changing the order in which --with-openmp option is processed] >> >> Satish >> ----------- >> >> diff --git a/config/BuildSystem/config/compilers.py b/config/BuildSystem/config/compilers.py >> index b4cbc183c0..cfffb1a80e 100644 >> --- a/config/BuildSystem/config/compilers.py >> +++ b/config/BuildSystem/config/compilers.py >> @@ -308,7 +308,7 @@ class Configure(config.base.Configure): >> self.logPrint('already in lflags: '+arg, 4, 'compilers') >> continue >> # Check for system libraries >> - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) >> + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) >> if m: >> self.logPrint('Skipping system library: '+arg, 4, 'compilers') >> continue >> @@ -687,7 +687,7 @@ class Configure(config.base.Configure): >> self.logPrint('already in lflags: '+arg, 4, 'compilers') >> continue >> # Check for system libraries >> - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) >> + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) >> if m: >> self.logPrint('Skipping system library: '+arg, 4, 'compilers') >> continue >> @@ -1085,7 +1085,7 @@ Otherwise you need a different combination of C, C++, and Fortran compilers") >> self.logPrint('Already in lflags so skipping: '+arg, 4, 'compilers') >> continue >> # Check for system libraries >> - m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', arg) >> + m = re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', arg) >> if m: >> self.logPrint('Found system library therefore skipping: '+arg, 4, 'compilers') >> continue >> >> On Tue, 10 Mar 2020, Cameron Smith wrote: >> >>> That makes sense. Thank you. >>> >>> -Cameron >>> >>> On 3/10/20 9:25 AM, Satish Balay wrote: >>>> PETSc configure attempts to check for compiler libraries [so that one >>>> can mix and use c,c++,fortran codes with a c linker] by running >>>> compilers in verbose mode - and parsing the output. [i.e checkCLibraries(), >>>> checkFortranLibraries() ..] >>>> >>>> Here IBM compiler is using different library internally based on 'xlc -V' vs >>>> 'xlc -fopenmp -V'. For most other compilers - its just an additional library >>>> [with corresponding include file] >>>> >>>> So -fopenmp needs to be set before this step [checkCLibraries()] in >>>> configure. >>>> >>>> PETSc configure treats -with-openmp as an additional package - and process >>>> this option after the above checkCLibraries() check. >>>> >>>> However CFLAGS etc get processed and set before the call to >>>> checkCLibraries(). >>>> >>>> My suggestion is a workaround to get -fopenmp option set before >>>> checkCLibraries() are called. >>>> >>>> Satish >>>> >>>> On Tue, 10 Mar 2020, Cameron Smith wrote: >>>> >>>>> Thank you. I'll give that a shot. >>>>> >>>>> Out of curiosity, how does passing the openmp flags relate to the >>>>> '--with-openmp' option described here: >>>>> >>>>> https://www.mcs.anl.gov/petsc/documentation/installation.html >>>>> >>>>> under 'Installing packages that utilize OpenMP'? 
Is this just passing the >>>>> openmp flags into compile/link commands of the packages that petsc builds >>>>> (via, --download- options) and not to the petsc compile/link? >>>>> >>>>> -Cameron >>>>> >>>>> On 3/10/20 9:01 AM, Satish Balay wrote: >>>>>> BTW: You might be able to do the same via spack. >>>>>> >>>>>> spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp >>>>>> fflags=-fopenmp cxxflags=-fopenmp >>>>>> >>>>>> Satish >>>>>> >>>>>> On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: >>>>>> >>>>>>> Cameron, >>>>>>> >>>>>>> You can try changing following petsc configure options and see if that >>>>>>> works.. [i.e build petsc manually] >>>>>>> >>>>>>> CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp >>>>>>> >>>>>>> Satish >>>>>>> >>>>>>> On Tue, 10 Mar 2020, Cameron Smith wrote: >>>>>>> >>>>>>>> Thank you Mark. >>>>>>>> >>>>>>>> The configure.log is attached. >>>>>>>> >>>>>>>> Please let me know if any other info is needed. >>>>>>>> >>>>>>>> -Cameron >>>>>>>> >>>>>>>> On 3/10/20 7:31 AM, Mark Adams wrote: >>>>>>>>> Hi?Cameron, >>>>>>>>> >>>>>>>>> This can go on the list and we always want the configure.log file. >>>>>>>>> >>>>>>>>> I build on Summit, but have not used the XL compilers. I've built 3.7.7 >>>>>>>>> with >>>>>>>>> GNU and PGI. (XGC usually wants PGI) >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith >>>>>>>> > wrote: >>>>>>>>> >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> I'm installing petsc 3.7.7 on a summit like system with the >>>>>>>>> following >>>>>>>>> spack spec: >>>>>>>>> >>>>>>>>> petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist >>>>>>>>> >>>>>>>>> with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This install >>>>>>>>> produces a `/path/to/petsc/install/lib/petsc/conf/petscvariables` >>>>>>>>> file >>>>>>>>> that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' and >>>>>>>>> 'PETSC_WITH_EXTERNAL_LIB' variables. >>>>>>>>> >>>>>>>>> The application I'm building, XGC, has a makefile based build >>>>>>>>> system >>>>>>>>> that includes '/path/to/petsc/install/lib/petsc/conf/variables' >>>>>>>>> which in >>>>>>>>> turn includes '/lib/petsc/conf/petscvariables'. >>>>>>>>> >>>>>>>>> ?From what I can tell, xlomp_ser is a serial implementation of >>>>>>>>> the >>>>>>>>> openmp library.? When XGC links this library it satisfies the >>>>>>>>> openmp >>>>>>>>> symbols XGC wants and at run time results in openmp API calls >>>>>>>>> like >>>>>>>>> 'omp_get_max_threads()' returning 1 regardless of the >>>>>>>>> OMP_NUM_THREADS >>>>>>>>> setting. >>>>>>>>> >>>>>>>>> Do you know how I can build petsc, with or without spack, and >>>>>>>>> avoid >>>>>>>>> this >>>>>>>>> library being listed in 'lib/petsc/conf/petscvariables'? >>>>>>>>> >>>>>>>>> If this should go to a petsc mailing list or git repo issues page >>>>>>>>> I >>>>>>>>> can >>>>>>>>> send it there. >>>>>>>>> >>>>>>>>> Thank-you, >>>>>>>>> Cameron >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>> >>>>> >>> >> From balay at mcs.anl.gov Tue Mar 10 13:48:12 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2020 13:48:12 -0500 Subject: [petsc-users] installing petsc via spack on ac922 (a summit like system) In-Reply-To: <526e4358-00f7-e4b3-889f-c0aa2c4fe910@rpi.edu> References: <04abf3cc-cd5c-1944-c9f3-2c701e27c3f8@rpi.edu> <1dc42c3c-c92f-3c6b-0111-a4293550e4b7@rpi.edu> <526e4358-00f7-e4b3-889f-c0aa2c4fe910@rpi.edu> Message-ID: Glad it worked! Thanks for the update. Satish On Tue, 10 Mar 2020, Cameron Smith wrote: > Thank you. 
> > Using the spack spec > > petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-qsmp=omp fflags=-qsmp=omp > cxxflags=-qsmp=omp > > and passing the same flags to a manual (non-spack) petsc 3.7.7 build produced > installs that do not list xlomp_ser in the configure.log. > > Thank-you, > Cameron > > On 3/10/20 9:48 AM, Satish Balay wrote: > > Created an MR for this change > > > > https://gitlab.com/petsc/petsc/-/merge_requests/2593 > > > > Satish > > > > On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > > > >> Another option is to strip out -lxlomp_ser when parsing 'xlc -V' [as its a > >> common library for all 3 language compilers]. i.e the following change > >> [this change is on current maint - but similar change can be done with > >> petsc-3.7].. > >> > >> [This is easier than changing the order in which --with-openmp option is > >> processed] > >> > >> Satish > >> ----------- > >> > >> diff --git a/config/BuildSystem/config/compilers.py > >> b/config/BuildSystem/config/compilers.py > >> index b4cbc183c0..cfffb1a80e 100644 > >> --- a/config/BuildSystem/config/compilers.py > >> +++ b/config/BuildSystem/config/compilers.py > >> @@ -308,7 +308,7 @@ class Configure(config.base.Configure): > >> self.logPrint('already in lflags: '+arg, 4, 'compilers') > >> continue > >> # Check for system libraries > >> - m = > >> re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', > >> arg) > >> + m = > >> re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', > >> arg) > >> if m: > >> self.logPrint('Skipping system library: '+arg, 4, 'compilers') > >> continue > >> @@ -687,7 +687,7 @@ class Configure(config.base.Configure): > >> self.logPrint('already in lflags: '+arg, 4, 'compilers') > >> continue > >> # Check for system libraries > >> - m = > >> re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', > >> arg) > >> + m = > >> re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', > >> arg) > >> if m: > >> self.logPrint('Skipping system library: '+arg, 4, 'compilers') > >> continue > >> @@ -1085,7 +1085,7 @@ Otherwise you need a different combination of C, C++, > >> and Fortran compilers") > >> self.logPrint('Already in lflags so skipping: '+arg, 4, > >> 'compilers') > >> continue > >> # Check for system libraries > >> - m = > >> re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|crt[0-9].[0-9][0-9].[0-9].o)$', > >> arg) > >> + m = > >> re.match(r'^-l(ang.*|crt[0-9].o|crtbegin.o|c|gcc|gcc_ext(.[0-9]+)*|System|cygwin|xlomp_ser|crt[0-9].[0-9][0-9].[0-9].o)$', > >> arg) > >> if m: > >> self.logPrint('Found system library therefore skipping: '+arg, > >> 4, 'compilers') > >> continue > >> > >> On Tue, 10 Mar 2020, Cameron Smith wrote: > >> > >>> That makes sense. Thank you. > >>> > >>> -Cameron > >>> > >>> On 3/10/20 9:25 AM, Satish Balay wrote: > >>>> PETSc configure attempts to check for compiler libraries [so that one > >>>> can mix and use c,c++,fortran codes with a c linker] by running > >>>> compilers in verbose mode - and parsing the output. [i.e > >>>> checkCLibraries(), > >>>> checkFortranLibraries() ..] > >>>> > >>>> Here IBM compiler is using different library internally based on 'xlc -V' > >>>> vs > >>>> 'xlc -fopenmp -V'. 
For most other compilers - its just an additional > >>>> library > >>>> [with corresponding include file] > >>>> > >>>> So -fopenmp needs to be set before this step [checkCLibraries()] in > >>>> configure. > >>>> > >>>> PETSc configure treats -with-openmp as an additional package - and > >>>> process > >>>> this option after the above checkCLibraries() check. > >>>> > >>>> However CFLAGS etc get processed and set before the call to > >>>> checkCLibraries(). > >>>> > >>>> My suggestion is a workaround to get -fopenmp option set before > >>>> checkCLibraries() are called. > >>>> > >>>> Satish > >>>> > >>>> On Tue, 10 Mar 2020, Cameron Smith wrote: > >>>> > >>>>> Thank you. I'll give that a shot. > >>>>> > >>>>> Out of curiosity, how does passing the openmp flags relate to the > >>>>> '--with-openmp' option described here: > >>>>> > >>>>> https://www.mcs.anl.gov/petsc/documentation/installation.html > >>>>> > >>>>> under 'Installing packages that utilize OpenMP'? Is this just passing > >>>>> the > >>>>> openmp flags into compile/link commands of the packages that petsc > >>>>> builds > >>>>> (via, --download- options) and not to the petsc compile/link? > >>>>> > >>>>> -Cameron > >>>>> > >>>>> On 3/10/20 9:01 AM, Satish Balay wrote: > >>>>>> BTW: You might be able to do the same via spack. > >>>>>> > >>>>>> spack install petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist cflags=-fopenmp > >>>>>> fflags=-fopenmp cxxflags=-fopenmp > >>>>>> > >>>>>> Satish > >>>>>> > >>>>>> On Tue, 10 Mar 2020, Satish Balay via petsc-users wrote: > >>>>>> > >>>>>>> Cameron, > >>>>>>> > >>>>>>> You can try changing following petsc configure options and see if that > >>>>>>> works.. [i.e build petsc manually] > >>>>>>> > >>>>>>> CFLAGS=-fopenmp FFLAGS=-fopenmp CXXFLAGS=-fopenmp > >>>>>>> > >>>>>>> Satish > >>>>>>> > >>>>>>> On Tue, 10 Mar 2020, Cameron Smith wrote: > >>>>>>> > >>>>>>>> Thank you Mark. > >>>>>>>> > >>>>>>>> The configure.log is attached. > >>>>>>>> > >>>>>>>> Please let me know if any other info is needed. > >>>>>>>> > >>>>>>>> -Cameron > >>>>>>>> > >>>>>>>> On 3/10/20 7:31 AM, Mark Adams wrote: > >>>>>>>>> Hi?Cameron, > >>>>>>>>> > >>>>>>>>> This can go on the list and we always want the configure.log file. > >>>>>>>>> > >>>>>>>>> I build on Summit, but have not used the XL compilers. I've built > >>>>>>>>> 3.7.7 > >>>>>>>>> with > >>>>>>>>> GNU and PGI. (XGC usually wants PGI) > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> On Mon, Mar 9, 2020 at 11:27 PM Cameron Smith >>>>>>>>> > wrote: > >>>>>>>>> > >>>>>>>>> Hello, > >>>>>>>>> > >>>>>>>>> I'm installing petsc 3.7.7 on a summit like system with the > >>>>>>>>> following > >>>>>>>>> spack spec: > >>>>>>>>> > >>>>>>>>> petsc at 3.7.7 ~hdf5 ~hypre ~superlu-dist > >>>>>>>>> > >>>>>>>>> with the XL 16.1.1 compiler and Spectrum MPI 10.3 .? This > >>>>>>>>> install > >>>>>>>>> produces a > >>>>>>>>> `/path/to/petsc/install/lib/petsc/conf/petscvariables` > >>>>>>>>> file > >>>>>>>>> that contains '-lxlomp_ser' in the 'PETSC_EXTERNAL_LIB_BASIC' > >>>>>>>>> and > >>>>>>>>> 'PETSC_WITH_EXTERNAL_LIB' variables. > >>>>>>>>> > >>>>>>>>> The application I'm building, XGC, has a makefile based build > >>>>>>>>> system > >>>>>>>>> that includes > >>>>>>>>> '/path/to/petsc/install/lib/petsc/conf/variables' > >>>>>>>>> which in > >>>>>>>>> turn includes '/lib/petsc/conf/petscvariables'. > >>>>>>>>> > >>>>>>>>> ?From what I can tell, xlomp_ser is a serial implementation > >>>>>>>>> of > >>>>>>>>> the > >>>>>>>>> openmp library.? 
When XGC links this library it satisfies the > >>>>>>>>> openmp > >>>>>>>>> symbols XGC wants and at run time results in openmp API calls > >>>>>>>>> like > >>>>>>>>> 'omp_get_max_threads()' returning 1 regardless of the > >>>>>>>>> OMP_NUM_THREADS > >>>>>>>>> setting. > >>>>>>>>> > >>>>>>>>> Do you know how I can build petsc, with or without spack, and > >>>>>>>>> avoid > >>>>>>>>> this > >>>>>>>>> library being listed in 'lib/petsc/conf/petscvariables'? > >>>>>>>>> > >>>>>>>>> If this should go to a petsc mailing list or git repo issues > >>>>>>>>> page > >>>>>>>>> I > >>>>>>>>> can > >>>>>>>>> send it there. > >>>>>>>>> > >>>>>>>>> Thank-you, > >>>>>>>>> Cameron > >>>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>> > >>>>> > >>>>> > >>> > >> > From yann.jobic at univ-amu.fr Wed Mar 11 05:59:48 2020 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Wed, 11 Mar 2020 11:59:48 +0100 Subject: [petsc-users] cmake, pkg-config and Libs.private Message-ID: <2decd489-891b-4623-b8b0-20c3196458f5@univ-amu.fr> Hi all, I'm trying to create a correct CMakeLists.txt in order to compile a petsc program. I can compile my code, but i cannot link it. I have this link command : /local/lib/openmpi/gcc8/4.0.3/bin/mpicc -g -rdynamic CMakeFiles/Test1.dir/src/dvm_dg1D_qff_2dV.c.o CMakeFiles/Test1.dir/src/computeHermCoeff.c.o CMakeFiles/Test1.dir/src/dg1DFuncs.c.o -o Test1 -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetscsys /bin/ld: /local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib/libpetscsys.a(dlimpl.o): r?f?rence au symbole non d?fini ??dlclose@@GLIBC_2.2.5?? //usr/lib64/libdl.so.2?: erreur lors de l'ajout de symboles?: DSO manquant dans la ligne de commande But if i add Libs.private to this line in a terminal to test the command, it works: mpicc -rdynamic CMakeFiles/Test1.dir/src/dvm_dg1D_qff_2dV.c.o CMakeFiles/Test1.dir/src/computeHermCoeff.c.o CMakeFiles/Test1.dir/src/dg1DFuncs.c.o -o Test1 -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetscsys -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib -L/local/lib/openmpi/gcc8/4.0.3/lib -L/usr/lib/gcc/x86_64-redhat-linux/8 -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu -lsuperlu_dist -lml -lp4est -lsc -lflapack -lfblas -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lnetcdf -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -ltriangle -lm -lz -lsz -lX11 -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lrt -lquadmath -lstdc++ -ldl So I should be using Libs.private in the link command, but i don't know how to tell cmake to do so. What is the correct line that i have to add to CMakeLists.txt in order to have Libs and Libs.private from the PETSc.pc file ? What I am doing wrong ? 
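Should I instead be using the static link variables that CMake's FindPkgConfig module exposes? Something like the following sketch (untested, and I am not sure that PETSC_STATIC_LDFLAGS is the right variable):

find_package(PkgConfig REQUIRED)
pkg_search_module(PETSC REQUIRED PETSc)
# PETSC_LDFLAGS only carries Libs from PETSc.pc; the *_STATIC_* variables
# should also carry Libs.private, which is what seems to be missing at link time
include_directories(${PETSC_INCLUDE_DIRS})
target_link_libraries(Test1 ${PETSC_STATIC_LDFLAGS})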
Thanks, Best regards, Yann PS : My CMakeLists.txt so far : cmake_minimum_required(VERSION 2.8.11) list (APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake-modules) set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE) # set root of location to find PETSc configuration set(PETSC $ENV{PETSC_DIR}/$ENV{PETSC_ARCH}) set(ENV{PKG_CONFIG_PATH} ${PETSC}/lib/pkgconfig) # Determine PETSc compilers execute_process ( COMMAND pkg-config PETSc --variable=ccompiler COMMAND tr -d '\n' OUTPUT_VARIABLE C_COMPILER) SET(CMAKE_C_COMPILER ${C_COMPILER}) execute_process ( COMMAND pkg-config PETSc --variable=cxxcompiler COMMAND tr -d '\n' OUTPUT_VARIABLE CXX_COMPILER) if (CXX_COMPILER) SET(CMAKE_CXX_COMPILER ${CXX_COMPILER}) endif (CXX_COMPILER) execute_process ( COMMAND pkg-config PETSc --variable=fcompiler COMMAND tr -d '\n' OUTPUT_VARIABLE FORTRAN_COMPILER) if (FORTRAN_COMPILER) SET(CMAKE_Fortran_COMPILER ${FORTRAN_COMPILER}) enable_language(Fortran) endif (FORTRAN_COMPILER) find_package(PkgConfig REQUIRED) pkg_search_module(PETSC REQUIRED PETSc) set(SOURCES src/dvm_dg1D_qff_2dV.c src/computeHermCoeff.c src/dg1DFuncs.c) include_directories(${PETSC_INCLUDE_DIRS}) include_directories(includes) add_executable(Test1 ${SOURCES}) target_link_libraries(Test1 ${PETSC_LDFLAGS}) From yann.jobic at univ-amu.fr Wed Mar 11 06:24:12 2020 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Wed, 11 Mar 2020 12:24:12 +0100 Subject: [petsc-users] cmake, pkg-config and Libs.private In-Reply-To: <2decd489-891b-4623-b8b0-20c3196458f5@univ-amu.fr> References: <2decd489-891b-4623-b8b0-20c3196458f5@univ-amu.fr> Message-ID: <7404914b-2dab-c4c2-1959-3b6d0d668b2b@univ-amu.fr> Hi all, I solved my problem, but i don't know if it's the right way to do it. Libs.private is used in case of static linking. So I added: execute_process ( COMMAND pkg-config PETSc --libs --static OUTPUT_VARIABLE STATIC_LIBS) string(STRIP ${STATIC_LIBS} STATIC_LIBS) message("Libs static : " ${STATIC_LIBS}) And it works. Is it the right way to do it ? Thanks, Best regards, Yann Le 11/03/2020 ? 11:59, Yann Jobic a ?crit?: > Hi all, > > I'm trying to create a correct CMakeLists.txt in order to compile a > petsc program. > I can compile my code, but i cannot link it. > > I have this link command : > /local/lib/openmpi/gcc8/4.0.3/bin/mpicc -g? -rdynamic > CMakeFiles/Test1.dir/src/dvm_dg1D_qff_2dV.c.o > CMakeFiles/Test1.dir/src/computeHermCoeff.c.o > CMakeFiles/Test1.dir/src/dg1DFuncs.c.o? -o Test1 > -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib -lpetscts > -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetscsys > /bin/ld: > /local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib/libpetscsys.a(dlimpl.o): > r?f?rence au symbole non d?fini ??dlclose@@GLIBC_2.2.5?? > //usr/lib64/libdl.so.2?: erreur lors de l'ajout de symboles?: DSO > manquant dans la ligne de commande > > But if i add Libs.private to this line in a terminal to test the > command, it works: > mpicc?? -rdynamic CMakeFiles/Test1.dir/src/dvm_dg1D_qff_2dV.c.o > CMakeFiles/Test1.dir/src/computeHermCoeff.c.o > CMakeFiles/Test1.dir/src/dg1DFuncs.c.o? 
-o Test1 > -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib -lpetscts > -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetscsys > -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib > -L/local/lib/openmpi/gcc8/4.0.3/lib -L/usr/lib/gcc/x86_64-redhat-linux/8 > -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord > -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd > -lamd -lsuitesparseconfig -lsuperlu -lsuperlu_dist -lml -lp4est -lsc > -lflapack -lfblas -lptesmumps -lptscotchparmetis -lptscotch > -lptscotcherr -lesmumps -lscotch -lscotcherr -lnetcdf -lhdf5hl_fortran > -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -ltriangle -lm -lz > -lsz -lX11 -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr > -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath > -lpthread -lrt -lquadmath -lstdc++ -ldl > > So I should be using Libs.private in the link command, but i don't know > how to tell cmake to do so. > > What is the correct line that i have to add to CMakeLists.txt in order > to have Libs and Libs.private from the PETSc.pc file ? > What I am doing wrong ? > > Thanks, > > Best regards, > > Yann > > PS : My CMakeLists.txt so far : > cmake_minimum_required(VERSION 2.8.11) > > list (APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake-modules) > > set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE) > > # set root of location to find PETSc configuration > set(PETSC $ENV{PETSC_DIR}/$ENV{PETSC_ARCH}) > set(ENV{PKG_CONFIG_PATH} ${PETSC}/lib/pkgconfig) > > # Determine PETSc compilers > execute_process ( COMMAND pkg-config PETSc --variable=ccompiler COMMAND > tr -d '\n' OUTPUT_VARIABLE C_COMPILER) > SET(CMAKE_C_COMPILER ${C_COMPILER}) > ? execute_process ( COMMAND pkg-config PETSc --variable=cxxcompiler > COMMAND tr -d '\n' OUTPUT_VARIABLE CXX_COMPILER) > if (CXX_COMPILER) > ? SET(CMAKE_CXX_COMPILER ${CXX_COMPILER}) > endif (CXX_COMPILER) > execute_process ( COMMAND pkg-config PETSc --variable=fcompiler COMMAND > tr -d '\n' OUTPUT_VARIABLE FORTRAN_COMPILER) > if (FORTRAN_COMPILER) > ? SET(CMAKE_Fortran_COMPILER ${FORTRAN_COMPILER}) > ? enable_language(Fortran) > endif (FORTRAN_COMPILER) > > find_package(PkgConfig REQUIRED) > pkg_search_module(PETSC REQUIRED PETSc) > > set(SOURCES src/dvm_dg1D_qff_2dV.c src/computeHermCoeff.c src/dg1DFuncs.c) > include_directories(${PETSC_INCLUDE_DIRS}) > include_directories(includes) > > add_executable(Test1 ${SOURCES}) > target_link_libraries(Test1 ${PETSC_LDFLAGS}) From jed at jedbrown.org Wed Mar 11 10:20:27 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 11 Mar 2020 09:20:27 -0600 Subject: [petsc-users] cmake, pkg-config and Libs.private In-Reply-To: <7404914b-2dab-c4c2-1959-3b6d0d668b2b@univ-amu.fr> References: <2decd489-891b-4623-b8b0-20c3196458f5@univ-amu.fr> <7404914b-2dab-c4c2-1959-3b6d0d668b2b@univ-amu.fr> Message-ID: <87a74mrk90.fsf@jedbrown.org> Please check out these resources. https://cmake.org/cmake/help/latest/module/FindPkgConfig.html https://gitlab.com/petsc/petsc/-/merge_requests/2367 Yann Jobic writes: > Hi all, > > I solved my problem, but i don't know if it's the right way to do it. > Libs.private is used in case of static linking. So I added: > execute_process ( COMMAND pkg-config PETSc --libs --static > OUTPUT_VARIABLE STATIC_LIBS) > string(STRIP ${STATIC_LIBS} STATIC_LIBS) > message("Libs static : " ${STATIC_LIBS}) > > And it works. Is it the right way to do it ? > > Thanks, > > Best regards, > > Yann > > > Le 11/03/2020 ? 
11:59, Yann Jobic a ?crit?: >> Hi all, >> >> I'm trying to create a correct CMakeLists.txt in order to compile a >> petsc program. >> I can compile my code, but i cannot link it. >> >> I have this link command : >> /local/lib/openmpi/gcc8/4.0.3/bin/mpicc -g? -rdynamic >> CMakeFiles/Test1.dir/src/dvm_dg1D_qff_2dV.c.o >> CMakeFiles/Test1.dir/src/computeHermCoeff.c.o >> CMakeFiles/Test1.dir/src/dg1DFuncs.c.o? -o Test1 >> -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib -lpetscts >> -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetscsys >> /bin/ld: >> /local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib/libpetscsys.a(dlimpl.o): >> r?f?rence au symbole non d?fini ??dlclose@@GLIBC_2.2.5?? >> //usr/lib64/libdl.so.2?: erreur lors de l'ajout de symboles?: DSO >> manquant dans la ligne de commande >> >> But if i add Libs.private to this line in a terminal to test the >> command, it works: >> mpicc?? -rdynamic CMakeFiles/Test1.dir/src/dvm_dg1D_qff_2dV.c.o >> CMakeFiles/Test1.dir/src/computeHermCoeff.c.o >> CMakeFiles/Test1.dir/src/dg1DFuncs.c.o? -o Test1 >> -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib -lpetscts >> -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetscsys >> -L/local/lib/petsc/3.12/p4/gcc_8.3.1/openmpi-gcc-opti/lib >> -L/local/lib/openmpi/gcc8/4.0.3/lib -L/usr/lib/gcc/x86_64-redhat-linux/8 >> -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord >> -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd >> -lamd -lsuitesparseconfig -lsuperlu -lsuperlu_dist -lml -lp4est -lsc >> -lflapack -lfblas -lptesmumps -lptscotchparmetis -lptscotch >> -lptscotcherr -lesmumps -lscotch -lscotcherr -lnetcdf -lhdf5hl_fortran >> -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -ltriangle -lm -lz >> -lsz -lX11 -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr >> -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath >> -lpthread -lrt -lquadmath -lstdc++ -ldl >> >> So I should be using Libs.private in the link command, but i don't know >> how to tell cmake to do so. >> >> What is the correct line that i have to add to CMakeLists.txt in order >> to have Libs and Libs.private from the PETSc.pc file ? >> What I am doing wrong ? >> >> Thanks, >> >> Best regards, >> >> Yann >> >> PS : My CMakeLists.txt so far : >> cmake_minimum_required(VERSION 2.8.11) >> >> list (APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake-modules) >> >> set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE) >> >> # set root of location to find PETSc configuration >> set(PETSC $ENV{PETSC_DIR}/$ENV{PETSC_ARCH}) >> set(ENV{PKG_CONFIG_PATH} ${PETSC}/lib/pkgconfig) >> >> # Determine PETSc compilers >> execute_process ( COMMAND pkg-config PETSc --variable=ccompiler COMMAND >> tr -d '\n' OUTPUT_VARIABLE C_COMPILER) >> SET(CMAKE_C_COMPILER ${C_COMPILER}) >> ? execute_process ( COMMAND pkg-config PETSc --variable=cxxcompiler >> COMMAND tr -d '\n' OUTPUT_VARIABLE CXX_COMPILER) >> if (CXX_COMPILER) >> ? SET(CMAKE_CXX_COMPILER ${CXX_COMPILER}) >> endif (CXX_COMPILER) >> execute_process ( COMMAND pkg-config PETSc --variable=fcompiler COMMAND >> tr -d '\n' OUTPUT_VARIABLE FORTRAN_COMPILER) >> if (FORTRAN_COMPILER) >> ? SET(CMAKE_Fortran_COMPILER ${FORTRAN_COMPILER}) >> ? 
enable_language(Fortran) >> endif (FORTRAN_COMPILER) >> >> find_package(PkgConfig REQUIRED) >> pkg_search_module(PETSC REQUIRED PETSc) >> >> set(SOURCES src/dvm_dg1D_qff_2dV.c src/computeHermCoeff.c src/dg1DFuncs.c) >> include_directories(${PETSC_INCLUDE_DIRS}) >> include_directories(includes) >> >> add_executable(Test1 ${SOURCES}) >> target_link_libraries(Test1 ${PETSC_LDFLAGS}) From adantra at gmail.com Wed Mar 11 18:25:09 2020 From: adantra at gmail.com (Adolfo Rodriguez) Date: Wed, 11 Mar 2020 18:25:09 -0500 Subject: [petsc-users] error destroying solver context Message-ID: I have a situation with a c++ code where I get an error when destroying the solver context after destroying the matrix. I have the following lines at the end of my function KSPDestroy(&ksp); MatDestroy(&A); PetscObjectDestroy((PetscObject*)&x); PetscObjectDestroy((PetscObject*)&b); It is a very simple program, very similar to the /ksp/ksp/ex1.c example. I can destroy the solver context (ksp) or the matrix (A) but not both. Does anybody have a clue? Adolfo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 11 20:07:00 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Mar 2020 21:07:00 -0400 Subject: [petsc-users] error destroying solver context In-Reply-To: References: Message-ID: On Wed, Mar 11, 2020 at 7:26 PM Adolfo Rodriguez wrote: > I have a situation with a c++ code where I get an error when destroying > the solver context after destroying the matrix. I have the following lines > at the end of my function > > KSPDestroy(&ksp); > MatDestroy(&A); > PetscObjectDestroy((PetscObject*)&x); > PetscObjectDestroy((PetscObject*)&b); > > It is a very simple program, very similar to the /ksp/ksp/ex1.c example. > > I can destroy the solver context (ksp) or the matrix (A) but not both. > Does anybody have a clue? > Please send the whole program. Thanks, Matt > Adolfo > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From berend.vanwachem at ovgu.de Thu Mar 12 06:39:55 2020 From: berend.vanwachem at ovgu.de (Berend van Wachem) Date: Thu, 12 Mar 2020 12:39:55 +0100 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections Message-ID: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Dear All, I have started to use DMPLEX with P4EST for a computational fluid dynamics application.?I am solving a coupled system of 4 discretised equations (for 3 velocity components and one pressure) on a mesh. However, next to these 4 variables, I also have a few single field variables (such as density and viscosity) defined over the mesh, which I don't solve for (they should not be part of the matrix with unknowns). Most of these variables are at the cell centers, but in a few cases, it want to define them at cell faces. With just DMPLEX, I solve this by: DMPlexCreateMesh, so I get an initial DM DMPlexCreateSection, indicating the need for 4 variables DMSetLocalSection DMCreateGlobalVector (and Matrix), so I get an Unknown vector, a RHS vector, and a matrix for the 4 variables. To get a vector for a single variable at the cell center or the cell face, I clone the original DM, I define a new Section on it, and then create the vector from that which I need (e.g. 
for density, viscosity or a velocity at the cell face). Then I loop over the mesh, and with MatSetValuesLocal, I set the coefficients. After that, I solve the system for multiple timesteps (sequential solves) and get the solution vector with the 4 variables after each solve. So-far, this works fine with DMPLEX. However, now I want to use P4EST, and I have difficulty defining a variable vector other than the original 4. I have changed the code structure: DMPlexCreateMesh, so I get an initial DM DMPlexCreateSection, indicating the need for 4 variables DMSetLocalSection DMForestSetBaseDM(DM, DMForest) to create a DMForest DMCreateGlobalVector (and Matrix), so I get a Unknown vector, a RHS vector, and a matrix for the 4 variables then I perform multiple time-steps, ?DMForestTemplate(DMForest -> ?DMForestPost) ?Adapt DMForestPost ?DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) ?DMForestTransferVec(UnknownVector , RefinedUnknownVector) ?DMForestPost -> DMForest and then DMConvert(DMForest,DMPLEX,DM) and I can solve the system as usual. That also seems to work. But my conceptual question: how can I convert the other variable vectors (obtained with a different section on the same DM) such as density and viscosity and faceVelocity within this framework? The DMForest has the same Section as the original DM and will thus have the space for exactly 4 variables per cell. I tried pushing another section on the DMForest and DMForestPost, but that does not seem to work. Please find attached a working example with code to do this, but I get the error: PETSC ERROR: PetscSectionGetChart() line 513 in /usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c Wrong type of object: Parameter # 1 So, I is there a way to DMForestTransferVec my other vectors from one DMForest to DMForestPost. How can I do this? Many thanks for your help! Best wishes, Berend. -------------- next part -------------- A non-text attachment was scrubbed... Name: example-dmplexp4est.c Type: text/x-csrc Size: 8137 bytes Desc: not available URL: From fdkong.jd at gmail.com Thu Mar 12 14:25:13 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 12 Mar 2020 13:25:13 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used Message-ID: Hi All, I had an issue when configuring petsc on a linux machine. I have the following error message: Compiling FBLASLAPACK; this may take several minutes =============================================================================== TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- --download-fblaslapack libraries cannot be used ******************************************************************************* The configuration log was attached. Thanks, Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.log Type: application/octet-stream Size: 1539301 bytes Desc: not available URL: From balay at mcs.anl.gov Thu Mar 12 14:38:19 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 12 Mar 2020 14:38:19 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: For some reason - the fortran compiler libraries check worked fine without -lgfortran. But now - fblaslapack check is failing without it. To work around - you can use option LIBS=-lgfortran Satish On Thu, 12 Mar 2020, Fande Kong wrote: > Hi All, > > I had an issue when configuring petsc on a linux machine. I have the > following error message: > > Compiling FBLASLAPACK; this may take several minutes > > =============================================================================== > > TESTING: checkLib from > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > --download-fblaslapack libraries cannot be used > ******************************************************************************* > > > The configuration log was attached. > > Thanks, > > Fande, > From balay at mcs.anl.gov Thu Mar 12 14:50:00 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 12 Mar 2020 14:50:00 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Does the attached patch make a difference? Satish On Thu, 12 Mar 2020, Satish Balay via petsc-users wrote: > For some reason - the fortran compiler libraries check worked fine without -lgfortran. > > But now - fblaslapack check is failing without it. > > To work around - you can use option LIBS=-lgfortran > > Satish > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > Hi All, > > > > I had an issue when configuring petsc on a linux machine. I have the > > following error message: > > > > Compiling FBLASLAPACK; this may take several minutes > > > > =============================================================================== > > > > TESTING: checkLib from > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > ------------------------------------------------------------------------------- > > --download-fblaslapack libraries cannot be used > > ******************************************************************************* > > > > > > The configuration log was attached.
> > > > Thanks, > > > > Fande, > > > -------------- next part -------------- diff --git a/config/BuildSystem/config/compilers.py b/config/BuildSystem/config/compilers.py index 8a854db1ca..8df2c754ee 100644 --- a/config/BuildSystem/config/compilers.py +++ b/config/BuildSystem/config/compilers.py @@ -912,9 +912,9 @@ Otherwise you need a different combination of C, C++, and Fortran compilers") cbody = "int main(int argc,char **args)\n{return 0;}\n"; self.pushLanguage('FC') if self.checkLink(includes='#include ',body=' call MPI_Allreduce()\n'): - fbody = "subroutine asub()\n print*,'testing'\n call MPI_Allreduce()\n return\n end\n" + fbody = "subroutine asub()\n integer,parameter :: idx=5\n write(6,100) idx\n 100 format('value:', i5)\n call MPI_Allreduce()\n return\n end\n" else: - fbody = "subroutine asub()\n print*,'testing'\n return\n end\n" + fbody = "subroutine asub()\n integer,parameter :: idx=5\n write(6,100) idx\n 100 format('value:', i5)\n return\n end\n" self.popLanguage() try: if self.checkCrossLink(fbody,cbody,language1='FC',language2='C'): From fdkong.jd at gmail.com Thu Mar 12 16:07:13 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 12 Mar 2020 15:07:13 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: This fixed the fblaslapack issue. Now have another issue about mumps. Please see the log file attached. Thanks, Fande, On Thu, Mar 12, 2020 at 1:38 PM Satish Balay wrote: > For some reason - the fortran compiler libraries check worked fine without > -lgfortran. > > But now - flbaslapack check is failing without it. > > To work arround - you can use option LIBS=-lgfortran > > Satish > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > Hi All, > > > > I had an issue when configuring petsc on a linux machine. I have the > > following error message: > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > =============================================================================== > > > > TESTING: checkLib from > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > > ------------------------------------------------------------------------------- > > --download-fblaslapack libraries cannot be used > > > ******************************************************************************* > > > > > > The configuration log was attached. > > > > Thanks, > > > > Fande, > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 2956248 bytes Desc: not available URL: From fdkong.jd at gmail.com Thu Mar 12 16:07:43 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 12 Mar 2020 15:07:43 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: This did not help. Made no difference. Thanks, Fande, On Thu, Mar 12, 2020 at 1:50 PM Satish Balay wrote: > Does the attached patch make a difference? > > Satish > > On Thu, 12 Mar 2020, Satish Balay via petsc-users wrote: > > > For some reason - the fortran compiler libraries check worked fine > without -lgfortran. > > > > But now - flbaslapack check is failing without it. 
> > > > To work arround - you can use option LIBS=-lgfortran > > > > Satish > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > Hi All, > > > > > > I had an issue when configuring petsc on a linux machine. I have the > > > following error message: > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > =============================================================================== > > > > > > TESTING: checkLib from > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for > > > details): > > > > ------------------------------------------------------------------------------- > > > --download-fblaslapack libraries cannot be used > > > > ******************************************************************************* > > > > > > > > > The configuration log was attached. > > > > > > Thanks, > > > > > > Fande, > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliu29 at ncsu.edu Thu Mar 12 16:13:28 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Thu, 12 Mar 2020 15:13:28 -0600 Subject: [petsc-users] Inquiry about the setup for multigrid as a preconditioner in Petsc. Message-ID: Hi, all, I am practising multigrid as a preconditioner in Petsc. From the previous resource, there are 2 main ways to set up the multigrid preconditioner, 1). For general cae, KSPCreate(MPI Comm comm,KSP *ksp); KSPGetPC(KSP ksp,PC *pc); PCSetType(PC pc,PCMG); PCMGSetLevels(pc,int levels,MPI Comm *comms); PCMGSetType(PC pc,PCMGType mode); PCMGSetCycleType(PC pc,PCMGCycleType ctype); ... PCMGSetInterpolation(PC pc,int level,Mat P); PCMGSetRestriction(PC pc,int level,Mat R); The above means that I need to specify a lot details, e.g., cycletype. interpolation and restriction matrix, coarse solver, etc. 2) For the case of structured mesh, (DMDA is enough) Taking the following case as an example, https://www.mcs.anl.gov/petsc/petsc-3.6/src/ksp/ksp/examples/tutorials/ex25.c.html 50: KSPCreate(PETSC_COMM_WORLD,&ksp); 51: DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,-3,1,1,0,&da); 52: KSPSetDM(ksp,da); 53: KSPSetComputeRHS(ksp,ComputeRHS,&user); 54: KSPSetComputeOperators(ksp,ComputeMatrix,&user); 55: KSPSetFromOptions(ksp); 56: KSPSolve(ksp,NULL,NULL); DMDA handles all the multigrid setting automatically, e.g., interpolation and restriction matrix. If my understanding is right, *my question is where to find these source file to define these default interpolation and restriction matrix. * Thanks,S Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu Mar 12 16:22:06 2020 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 12 Mar 2020 21:22:06 +0000 Subject: [petsc-users] Inquiry about the setup for multigrid as a preconditioner in Petsc. 
In-Reply-To: References: Message-ID: You want to look at the bottom of each of these web pages https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateInjection.html https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateInterpolation.html https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMCreateInterpolationScale.html At the bottom you will see URLs to the current set of DM implementations which implement Injection, Interpolation. Thanks Dave On Thu, 12 Mar 2020 at 21:14, Xiaodong Liu wrote: > Hi, all, > > I am practising multigrid as a preconditioner in Petsc. From the previous > resource, there are 2 main ways to set up the multigrid preconditioner, > > 1). For general cae, > KSPCreate(MPI Comm comm,KSP *ksp); > KSPGetPC(KSP ksp,PC *pc); > PCSetType(PC pc,PCMG); > PCMGSetLevels(pc,int levels,MPI Comm *comms); > PCMGSetType(PC pc,PCMGType mode); > PCMGSetCycleType(PC pc,PCMGCycleType ctype); > ... > PCMGSetInterpolation(PC pc,int level,Mat P); > PCMGSetRestriction(PC pc,int level,Mat R); > > The above means that I need to specify a lot details, e.g., cycletype. > interpolation and restriction matrix, coarse solver, etc. > > 2) For the case of structured mesh, (DMDA is enough) > Taking the following case as an example, > > https://www.mcs.anl.gov/petsc/petsc-3.6/src/ksp/ksp/examples/tutorials/ex25.c.html > > 50: KSPCreate(PETSC_COMM_WORLD,&ksp); > 51: DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,-3,1,1,0,&da); > 52: KSPSetDM(ksp,da); > 53: KSPSetComputeRHS(ksp,ComputeRHS,&user); > 54: KSPSetComputeOperators(ksp,ComputeMatrix,&user); > 55: KSPSetFromOptions(ksp); > 56: KSPSolve(ksp,NULL,NULL); > > DMDA handles all the multigrid setting automatically, e.g., interpolation > and restriction matrix. > If my understanding is right, *my question is where to find these source > file to define these default interpolation and restriction matrix. * > > Thanks,S > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Mar 12 16:42:54 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 12 Mar 2020 16:42:54 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Can you retry with the attached patch? BTW: Its best to use the latest patched version - i.e petsc-3.12.4.tar.gz Satish On Thu, 12 Mar 2020, Fande Kong wrote: > This fixed the fblaslapack issue. Now have another issue about mumps. > > Please see the log file attached. > > Thanks, > > Fande, > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay wrote: > > > For some reason - the fortran compiler libraries check worked fine without > > -lgfortran. > > > > But now - flbaslapack check is failing without it. > > > > To work arround - you can use option LIBS=-lgfortran > > > > Satish > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > Hi All, > > > > > > I had an issue when configuring petsc on a linux machine. 
I have the > > > following error message: > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > =============================================================================== > > > > > > TESTING: checkLib from > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > details): > > > > > ------------------------------------------------------------------------------- > > > --download-fblaslapack libraries cannot be used > > > > > ******************************************************************************* > > > > > > > > > The configuration log was attached. > > > > > > Thanks, > > > > > > Fande, > > > > > > > > -------------- next part -------------- diff --git a/config/BuildSystem/config/compilers.py b/config/BuildSystem/config/compilers.py index 8a854db1ca..64de7bd7a5 100644 --- a/config/BuildSystem/config/compilers.py +++ b/config/BuildSystem/config/compilers.py @@ -909,12 +909,13 @@ Otherwise you need a different combination of C, C++, and Fortran compilers") return skipfortranlibraries = 1 self.setCompilers.saveLog() - cbody = "int main(int argc,char **args)\n{return 0;}\n"; + asub=self.mangleFortranFunction("asub") + cbody = "extern void "+asub+"(void);\nint main(int argc,char **args)\n{\n "+asub+"();\n return 0;\n}\n"; self.pushLanguage('FC') if self.checkLink(includes='#include ',body=' call MPI_Allreduce()\n'): - fbody = "subroutine asub()\n print*,'testing'\n call MPI_Allreduce()\n return\n end\n" + fbody = " subroutine asub()\n print*,'testing'\n call MPI_Allreduce()\n return\n end\n" else: - fbody = "subroutine asub()\n print*,'testing'\n return\n end\n" + fbody = " subroutine asub()\n print*,'testing'\n return\n end\n" self.popLanguage() try: if self.checkCrossLink(fbody,cbody,language1='FC',language2='C'): From xliu29 at ncsu.edu Thu Mar 12 17:28:00 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Thu, 12 Mar 2020 16:28:00 -0600 Subject: [petsc-users] Inquiry about the setup for multigrid as a preconditioner in Petsc. In-Reply-To: References: Message-ID: Thanks. It is very helpful to make me understand the whole process. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Thu, Mar 12, 2020 at 3:22 PM Dave May wrote: > You want to look at the bottom of each of these web pages > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateInjection.html > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateInterpolation.html > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMCreateInterpolationScale.html > > At the bottom you will see URLs to the current set of DM implementations > which implement Injection, Interpolation. > > Thanks > Dave > > On Thu, 12 Mar 2020 at 21:14, Xiaodong Liu wrote: > >> Hi, all, >> >> I am practising multigrid as a preconditioner in Petsc. From the previous >> resource, there are 2 main ways to set up the multigrid preconditioner, >> >> 1). For general cae, >> KSPCreate(MPI Comm comm,KSP *ksp); >> KSPGetPC(KSP ksp,PC *pc); >> PCSetType(PC pc,PCMG); >> PCMGSetLevels(pc,int levels,MPI Comm *comms); >> PCMGSetType(PC pc,PCMGType mode); >> PCMGSetCycleType(PC pc,PCMGCycleType ctype); >> ... 
>> PCMGSetInterpolation(PC pc,int level,Mat P); >> PCMGSetRestriction(PC pc,int level,Mat R); >> >> The above means that I need to specify a lot details, e.g., cycletype. >> interpolation and restriction matrix, coarse solver, etc. >> >> 2) For the case of structured mesh, (DMDA is enough) >> Taking the following case as an example, >> >> https://www.mcs.anl.gov/petsc/petsc-3.6/src/ksp/ksp/examples/tutorials/ex25.c.html >> >> 50: KSPCreate(PETSC_COMM_WORLD,&ksp); >> 51: DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,-3,1,1,0,&da); >> 52: KSPSetDM(ksp,da); >> 53: KSPSetComputeRHS(ksp,ComputeRHS,&user); >> 54: KSPSetComputeOperators(ksp,ComputeMatrix,&user); >> 55: KSPSetFromOptions(ksp); >> 56: KSPSolve(ksp,NULL,NULL); >> >> DMDA handles all the multigrid setting automatically, e.g., interpolation >> and restriction matrix. >> If my understanding is right, *my question is where to find these source >> file to define these default interpolation and restriction matrix. * >> >> Thanks,S >> Xiaodong Liu, PhD >> X: Computational Physics Division >> Los Alamos National Laboratory >> P.O. Box 1663, >> Los Alamos, NM 87544 >> 505-709-0534 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 12 18:19:14 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 12 Mar 2020 19:19:14 -0400 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem wrote: > Dear All, > > I have started to use DMPLEX with P4EST for a computational fluid > dynamics application. I am solving a coupled system of 4 discretised > equations (for 3 velocity components and one pressure) on a mesh. > However, next to these 4 variables, I also have a few single field > variables (such as density and viscosity) defined over the mesh, which I > don't solve for (they should not be part of the matrix with unknowns). > Most of these variables are at the cell centers, but in a few cases, it > want to define them at cell faces. > > With just DMPLEX, I solve this by: > > DMPlexCreateMesh, so I get an initial DM > DMPlexCreateSection, indicating the need for 4 variables > DMSetLocalSection > DMCreateGlobalVector (and Matrix), so I get an Unknown vector, a RHS > vector, and a matrix for the 4 variables. > > To get a vector for a single variable at the cell center or the cell > face, I clone the original DM, I define a new Section on it, and then > create the vector from that which I need (e.g. for density, viscosity or > a velocity at the cell face). > > Then I loop over the mesh, and with MatSetValuesLocal, I set the > coefficients. After that, I solve the system for multiple timesteps > (sequential solves) and get the solution vector with the 4 variables > after each solve. > > So-far, this works fine with DMPLEX. However, now I want to use P4EST, > and I have difficulty defining a variable vector other than the original 4. 
> > I have changed the code structure: > > DMPlexCreateMesh, so I get an initial DM > DMPlexCreateSection, indicating the need for 4 variables > DMSetLocalSection > DMForestSetBaseDM(DM, DMForest) to create a DMForest > DMCreateGlobalVector (and Matrix), so I get a Unknown vector, a RHS > vector, and a matrix for the 4 variables > > then I perform multiple time-steps, > DMForestTemplate(DMForest -> DMForestPost) > Adapt DMForestPost > DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > DMForestTransferVec(UnknownVector , RefinedUnknownVector) > DMForestPost -> DMForest > and then DMConvert(DMForest,DMPLEX,DM) > and I can solve the system as usual. That also seems to work. > > But my conceptual question: how can I convert the other variable vectors > (obtained with a different section on the same DM) such as density and > viscosity and faceVelocity within this framework? > Here is my current thinking about DMs. A DM is a function space overlaying a topology. Much to my dismay, we do not have a topology object, so it hides inside DM. DMClone() creates a shallow copy of the topology. We use this to have any number of data layouts through PetscSection, laying over the same underlying topology. So for each layout you have, make a separate clone. Then things like TransferVec() will respond to the layout in that clone. Certainly it works this way in Plex. I admit to not having tried this for TransferVec(), but let me know if you have any problems. BTW, I usually use a dm for the solution, which I give to the solver, say SNESSetDM(snes, dm), and then clone it as dmAux which has the layout for all the auxiliary fields that are not involved in the solve. The Plex examples all use this form. Thanks, Matt > The DMForest has the same Section as the original DM and will thus have > the space for exactly 4 variables per cell. I tried pushing another > section on the DMForest and DMForestPost, but that does not seem to > work. Please find attached a working example with code to do this, but I > get the error: > > PETSC ERROR: PetscSectionGetChart() line 513 in > /usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c Wrong > type of object: Parameter # 1 > > So, I is there a way to DMForestTransferVec my other vectors from one > DMForest to DMForestPost. How can I do this? > > Many thanks for your help! > > Best wishes, Berend. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Thu Mar 12 18:41:05 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 12 Mar 2020 17:41:05 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Thanks, Satish, But still have the problem. Please see the attached log file. Thanks, Fande. On Thu, Mar 12, 2020 at 3:42 PM Satish Balay wrote: > Can you retry with the attached patch? > > BTW: Its best to use the latest patched version - i.e petsc-3.12.4.tar.gz > > Satish > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > This fixed the fblaslapack issue. Now have another issue about mumps. > > > > Please see the log file attached. > > > > Thanks, > > > > Fande, > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay wrote: > > > > > For some reason - the fortran compiler libraries check worked fine > without > > > -lgfortran. 
> > > > > > But now - flbaslapack check is failing without it. > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > Satish > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > Hi All, > > > > > > > > I had an issue when configuring petsc on a linux machine. I have the > > > > following error message: > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > =============================================================================== > > > > > > > > TESTING: checkLib from > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > ******************************************************************************* > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > configure.log for > > > > details): > > > > > > > > ------------------------------------------------------------------------------- > > > > --download-fblaslapack libraries cannot be used > > > > > > > > ******************************************************************************* > > > > > > > > > > > > The configuration log was attached. > > > > > > > > Thanks, > > > > > > > > Fande, > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 159735 bytes Desc: not available URL: From mfadams at lbl.gov Thu Mar 12 19:15:51 2020 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 12 Mar 2020 20:15:51 -0400 Subject: [petsc-users] make error in stefanozampini/hypre-cuda-rebased-v2 Message-ID: I get this make error in stefanozampini/hypre-cuda-rebased-v2 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 115844 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1719261 bytes Desc: not available URL: From balay at mcs.anl.gov Thu Mar 12 20:06:18 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 12 Mar 2020 20:06:18 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: I can't figure out what the stack in the attached configure.log. [likely some stuff isn't getting logged in it] Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? Satish On Thu, 12 Mar 2020, Fande Kong wrote: > Thanks, Satish, > > But still have the problem. Please see the attached log file. > > Thanks, > > Fande. > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay wrote: > > > Can you retry with the attached patch? > > > > BTW: Its best to use the latest patched version - i.e petsc-3.12.4.tar.gz > > > > Satish > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > This fixed the fblaslapack issue. Now have another issue about mumps. > > > > > > Please see the log file attached. > > > > > > Thanks, > > > > > > Fande, > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay wrote: > > > > > > > For some reason - the fortran compiler libraries check worked fine > > without > > > > -lgfortran. > > > > > > > > But now - flbaslapack check is failing without it. 
> > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > > > Satish > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > Hi All, > > > > > > > > > > I had an issue when configuring petsc on a linux machine. I have the > > > > > following error message: > > > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > > > > > =============================================================================== > > > > > > > > > > TESTING: checkLib from > > > > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > configure.log for > > > > > details): > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > --download-fblaslapack libraries cannot be used > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > The configuration log was attached. > > > > > > > > > > Thanks, > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Thu Mar 12 20:26:43 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 12 Mar 2020 20:26:43 -0500 Subject: [petsc-users] make error in stefanozampini/hypre-cuda-rebased-v2 In-Reply-To: References: Message-ID: >>>>>>>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(334): error: function "atomicMin(long long *, long long)" has already been defined /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(345): error: function "atomicMax(long long *, long long)" has already been defined /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(413): error: function "atomicAnd(long long *, long long)" has already been defined /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(414): error: function "atomicOr(long long *, long long)" has already been defined /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(415): error: function "atomicXor(long long *, long long)" has already been defined <<<<<< This is a strange error. For one the compiler is not showing where the prior definition is. And the prior definition I see is 'double atomicMin(double* address, double val)' which shouldn't conflict - as this is c++? Satish On Thu, 12 Mar 2020, Mark Adams wrote: > I get this make error in stefanozampini/hypre-cuda-rebased-v2 > From sajidsyed2021 at u.northwestern.edu Thu Mar 12 20:31:43 2020 From: sajidsyed2021 at u.northwestern.edu (Sajid Ali) Date: Thu, 12 Mar 2020 20:31:43 -0500 Subject: [petsc-users] Questions about TSAdjoint for time dependent parameters In-Reply-To: <918151C4-5C6E-4AF2-A540-854AB6B1AC32@anl.gov> References: <918151C4-5C6E-4AF2-A540-854AB6B1AC32@anl.gov> Message-ID: Hi Hong, For the optimal control example, the cost function has an integral term which necessitates the setup of a sub-TS quadrature. The Jacobian with respect to parameter, (henceforth denoted by Jacp) has dimensions that depend upon the number of steps that the TS integrates for. I'm trying to implement a simpler case where the cost function doesn't have an integral term but the parameters are still time dependent. 
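By time dependent I mean that the scalar parameter is allowed to take a separate value at every time step, so the full parameter vector stacks one copy per step and Jacp only has entries in the column belonging to the current step. A rough sketch of the kind of Jacp routine I think this needs, for the modified Van der Pol case described below (the function name is my own, and I am not sure whether TSGetStepNumber() returns the right column index during the backward sweep):

static PetscErrorCode RHSJacobianP_TimeDependent(TS ts, PetscReal t, Vec X, Mat A, void *ctx)
{
  PetscErrorCode    ierr;
  PetscInt          step, row[] = {0, 1}, col[1];
  PetscScalar       J[2][1];
  const PetscScalar *x;

  PetscFunctionBeginUser;
  ierr = TSGetStepNumber(ts, &step);CHKERRQ(ierr);   /* column of mu_step in the 2 x nsteps Jacp */
  col[0] = step;
  ierr = VecGetArrayRead(X, &x);CHKERRQ(ierr);
  J[0][0] = 0.0;
  J[1][0] = (1.0 - x[0]*x[0])*x[1] - x[0];           /* same entry as in the time-independent case */
  ierr = VecRestoreArrayRead(X, &x);CHKERRQ(ierr);
  ierr = MatZeroEntries(A);CHKERRQ(ierr);
  ierr = MatSetValues(A, 2, row, 1, col, &J[0][0], INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}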
For this, I modified the standard Van der Pol example (ex20adj.c) to make mu a time dependent parameter (though it has the same value at all points in time and I also made the initial conditions & params independent). Since the structure of Jacp doesn't depend on time (i.e. it is the same at all points in time, the structure being identical to the time-independent case), is it necessary that I create a Jacp matrix size whose dimensions are [dimensions of time-independent Jacp] * -ts_max_steps ? Keeping Jacp dimensions the same as dimensions of time-independent Jacp causes the program to crash (possibly due to the fact that Jacp and adjoint vector can't be multiplied). Ideally, it would be nice to have a Jacp analog of TSRHSJacobianSetReuse whereby I specify the Jacp routine once and TS knows how to reuse that at all times. Is this possible with the current petsc-master ? Another question I have is regarding exclusive calculation of one adjoint. If I'm not interested in adjoint with respect to initial conditions, can I ask TSAdjoing to not calculate that ? Setting the initialization for adjoint vector with respect to initial conditions to be NULL in TSSetCostGradients doesn't work. Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Mar 12 21:34:25 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 12 Mar 2020 21:34:25 -0500 Subject: [petsc-users] make error in stefanozampini/hypre-cuda-rebased-v2 In-Reply-To: References: Message-ID: And I tried this build [cuda+pgi] on a local linux box - and don't see this issue. So I'm not sure what to suggest. Satish --------- balay at compute-386-01:/scratch/balay/petsc$ ./configure --with-cuda=1 --with-64-bit-indices=1 --download-hypre-configure-arguments=HYPRE_CUDA_SM=70 --download-metis -download-parmetis --download-hypre CC=pgcc FC=pgf90 CXX=pgc++ --download-mpich CUDAFLAGS="-ccbin pgc++" nvcc -c -ccbin pgc++ -Xcompiler -fPIC -g -I/scratch/balay/petsc/arch-linux2-c-debug/include -Wno-deprecated-gpu-targets --compiler-options="-g -I/scratch/balay/petsc/include -I/scratch/balay/petsc/arch-linux2-c-debug/include -I/nfs/gce/software/custom/linux-ubuntu18.04-x86_64/cuda/10.2/include " /scratch/balay/petsc/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu -o arch-linux2-c-debug/obj/vec/is/sf/impls/basic/cuda/sfpackcuda.o # Compile first so that if there is an error, it comes from a normal compile On Thu, 12 Mar 2020, Satish Balay via petsc-users wrote: > >>>>>>>> > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(334): error: function "atomicMin(long long *, long long)" has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(345): error: function "atomicMax(long long *, long long)" has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(413): error: function "atomicAnd(long long *, long long)" has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(414): error: function "atomicOr(long long *, long long)" has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/sfpackcuda.cu(415): error: function "atomicXor(long long *, long long)" has already been defined > <<<<<< > > This is a strange error. 
For one the compiler is not showing where the prior definition is. > > And the prior definition I see is 'double atomicMin(double* address, double val)' which shouldn't conflict - as this is c++? > > Satish > > On Thu, 12 Mar 2020, Mark Adams wrote: > > > I get this make error in stefanozampini/hypre-cuda-rebased-v2 > > > From mfadams at lbl.gov Fri Mar 13 08:17:40 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 13 Mar 2020 09:17:40 -0400 Subject: [petsc-users] make error in stefanozampini/hypre-cuda-rebased-v2 In-Reply-To: References: Message-ID: This error goes away with 32 bit integers. These methods are part of CUDA C++ (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html) and I see unsigned long long int versions, but not signed ones that are in PETSc. My cuda is not that new: 08:43 2 stefanozampini/hypre-cuda-rebased-v2= ~/petsc_install$ nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Sun_Jul_28_19:07:52_PDT_2019 Cuda compilation tools, release 10.1, V10.1.243 Junchao, do you have any ideas? I did try just removing these methods and then it complained that they were not there. Thanks, Mark On Thu, Mar 12, 2020 at 9:26 PM Satish Balay wrote: > >>>>>>>> > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ > sfpackcuda.cu(334): error: function "atomicMin(long long *, long long)" > has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ > sfpackcuda.cu(345): error: function "atomicMax(long long *, long long)" > has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ > sfpackcuda.cu(413): error: function "atomicAnd(long long *, long long)" > has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ > sfpackcuda.cu(414): error: function "atomicOr(long long *, long long)" > has already been defined > > /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ > sfpackcuda.cu(415): error: function "atomicXor(long long *, long long)" > has already been defined > <<<<<< > > This is a strange error. For one the compiler is not showing where the > prior definition is. > > And the prior definition I see is 'double atomicMin(double* address, > double val)' which shouldn't conflict - as this is c++? > > Satish > > On Thu, 12 Mar 2020, Mark Adams wrote: > > > I get this make error in stefanozampini/hypre-cuda-rebased-v2 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berend.vanwachem at ovgu.de Fri Mar 13 08:45:20 2020 From: berend.vanwachem at ovgu.de (Berend van Wachem) Date: Fri, 13 Mar 2020 14:45:20 +0100 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: Dear Matt, Thanks for your response. My understanding of the DM and DMClone is the same - and I have tested this with a DMPLEX DM without problems. However, for some reason, I cannot change/set the section of a P4EST dm. In the attached example code, I get an error in line 140, where I try to create a new section from the cloned P4EST DM. Is it not possible to create/set a section on a P4EST DM? Or maybe I am doing something else wrong? Do you suggest a workaround? Many thanks, Berend. 
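For reference, the per-clone section pattern that works with plain DMPLEX is roughly the following; this is only a minimal sketch for a single cell-centered scalar such as density, with illustrative variable names and abbreviated error handling. The question is whether the same calls are accepted on the cloned P4EST DM, or whether the section has to be set on the DMPLEX obtained from DMConvert.

  DM           dmDensity;
  PetscSection sec;
  Vec          density;
  PetscInt     cStart, cEnd, c;

  ierr = DMClone(dm, &dmDensity);CHKERRQ(ierr);                      /* shallow copy of the topology */
  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm), &sec);CHKERRQ(ierr);
  ierr = DMPlexGetHeightStratum(dmDensity, 0, &cStart, &cEnd);CHKERRQ(ierr); /* height 0 = cells */
  ierr = PetscSectionSetChart(sec, cStart, cEnd);CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {
    ierr = PetscSectionSetDof(sec, c, 1);CHKERRQ(ierr);              /* one dof per cell */
  }
  ierr = PetscSectionSetUp(sec);CHKERRQ(ierr);
  ierr = DMSetLocalSection(dmDensity, sec);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&sec);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(dmDensity, &density);CHKERRQ(ierr);    /* vector for the single field */

Note that DMPlexGetHeightStratum is a DMPlex-specific call, so on the forest DM the chart would presumably have to come from the converted DMPLEX instead.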
On 2020-03-13 00:19, Matthew Knepley wrote: > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem > > wrote: > > Dear All, > > I have started to use DMPLEX with P4EST for a computational fluid > dynamics application.?I am solving a coupled system of 4 discretised > equations (for 3 velocity components and one pressure) on a mesh. > However, next to these 4 variables, I also have a few single field > variables (such as density and viscosity) defined over the mesh, > which I > don't solve for (they should not be part of the matrix with unknowns). > Most of these variables are at the cell centers, but in a few cases, it > want to define them at cell faces. > > With just DMPLEX, I solve this by: > > DMPlexCreateMesh, so I get an initial DM > DMPlexCreateSection, indicating the need for 4 variables > DMSetLocalSection > DMCreateGlobalVector (and Matrix), so I get an Unknown vector, a RHS > vector, and a matrix for the 4 variables. > > To get a vector for a single variable at the cell center or the cell > face, I clone the original DM, I define a new Section on it, and then > create the vector from that which I need (e.g. for density, > viscosity or > a velocity at the cell face). > > Then I loop over the mesh, and with MatSetValuesLocal, I set the > coefficients. After that, I solve the system for multiple timesteps > (sequential solves) and get the solution vector with the 4 variables > after each solve. > > So-far, this works fine with DMPLEX. However, now I want to use P4EST, > and I have difficulty defining a variable vector other than the > original 4. > > I have changed the code structure: > > DMPlexCreateMesh, so I get an initial DM > DMPlexCreateSection, indicating the need for 4 variables > DMSetLocalSection > DMForestSetBaseDM(DM, DMForest) to create a DMForest > DMCreateGlobalVector (and Matrix), so I get a Unknown vector, a RHS > vector, and a matrix for the 4 variables > > then I perform multiple time-steps, > ? ?DMForestTemplate(DMForest -> ?DMForestPost) > ? ?Adapt DMForestPost > ? ?DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > ? ?DMForestTransferVec(UnknownVector , RefinedUnknownVector) > ? ?DMForestPost -> DMForest > and then DMConvert(DMForest,DMPLEX,DM) > and I can solve the system as usual. That also seems to work. > > But my conceptual question: how can I convert the other variable > vectors > (obtained with a different section on the same DM) such as density and > viscosity and faceVelocity within this framework? > > > Here is my current thinking about DMs. A DM is a function space > overlaying a topology. Much to my dismay, we > do not have a topology object, so it hides inside DM. DMClone() creates > a shallow copy of the topology. We use > this to have any number of data layouts through PetscSection, laying > over the same underlying topology. > > So for each layout you have, make a separate clone. Then things like > TransferVec() will respond to the layout in > that clone. Certainly it works this way in Plex. I admit to not having > tried this for TransferVec(), but let me know if > you have any problems. > > BTW, I usually use a dm for the solution, which I give to the solver, > say SNESSetDM(snes, dm), and then clone > it as dmAux which has the layout for all the auxiliary fields that are > not involved in the solve. The Plex examples > all use this form. > > ? Thanks, > > ? ? ?Matt > > The DMForest has the same Section as the original DM and will thus have > the space for exactly 4 variables per cell. 
I tried pushing another > section on the DMForest and DMForestPost, but that does not seem to > work. Please find attached a working example with code to do this, > but I > get the error: > > PETSC ERROR: PetscSectionGetChart() line 513 in > /usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c Wrong > type of object: Parameter # 1 > > So, I is there a way to DMForestTransferVec my other vectors from one > DMForest to DMForestPost. How can I do this? > > Many thanks for your help! > > Best wishes, Berend. > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- A non-text attachment was scrubbed... Name: dmplexp4est.c Type: text/x-csrc Size: 7235 bytes Desc: not available URL: From knepley at gmail.com Fri Mar 13 08:50:36 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 13 Mar 2020 09:50:36 -0400 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: On Fri, Mar 13, 2020 at 9:45 AM Berend van Wachem wrote: > Dear Matt, > > Thanks for your response. My understanding of the DM and DMClone is the > same - and I have tested this with a DMPLEX DM without problems. > > However, for some reason, I cannot change/set the section of a P4EST dm. > In the attached example code, I get an error in line 140, where I try to > create a new section from the cloned P4EST DM. Is it not possible to > create/set a section on a P4EST DM? Or maybe I am doing something else > wrong? Do you suggest a workaround? > Ah, I see. Let me check your example. Toby, is this the way p4est acts right now? Thanks, Matt > Many thanks, Berend. > > > On 2020-03-13 00:19, Matthew Knepley wrote: > > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem > > > wrote: > > > > Dear All, > > > > I have started to use DMPLEX with P4EST for a computational fluid > > dynamics application. I am solving a coupled system of 4 discretised > > equations (for 3 velocity components and one pressure) on a mesh. > > However, next to these 4 variables, I also have a few single field > > variables (such as density and viscosity) defined over the mesh, > > which I > > don't solve for (they should not be part of the matrix with > unknowns). > > Most of these variables are at the cell centers, but in a few cases, > it > > want to define them at cell faces. > > > > With just DMPLEX, I solve this by: > > > > DMPlexCreateMesh, so I get an initial DM > > DMPlexCreateSection, indicating the need for 4 variables > > DMSetLocalSection > > DMCreateGlobalVector (and Matrix), so I get an Unknown vector, a RHS > > vector, and a matrix for the 4 variables. > > > > To get a vector for a single variable at the cell center or the cell > > face, I clone the original DM, I define a new Section on it, and then > > create the vector from that which I need (e.g. for density, > > viscosity or > > a velocity at the cell face). > > > > Then I loop over the mesh, and with MatSetValuesLocal, I set the > > coefficients. After that, I solve the system for multiple timesteps > > (sequential solves) and get the solution vector with the 4 variables > > after each solve. > > > > So-far, this works fine with DMPLEX. However, now I want to use > P4EST, > > and I have difficulty defining a variable vector other than the > > original 4. 
> > > > I have changed the code structure: > > > > DMPlexCreateMesh, so I get an initial DM > > DMPlexCreateSection, indicating the need for 4 variables > > DMSetLocalSection > > DMForestSetBaseDM(DM, DMForest) to create a DMForest > > DMCreateGlobalVector (and Matrix), so I get a Unknown vector, a RHS > > vector, and a matrix for the 4 variables > > > > then I perform multiple time-steps, > > DMForestTemplate(DMForest -> DMForestPost) > > Adapt DMForestPost > > DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > > DMForestTransferVec(UnknownVector , RefinedUnknownVector) > > DMForestPost -> DMForest > > and then DMConvert(DMForest,DMPLEX,DM) > > and I can solve the system as usual. That also seems to work. > > > > But my conceptual question: how can I convert the other variable > > vectors > > (obtained with a different section on the same DM) such as density > and > > viscosity and faceVelocity within this framework? > > > > > > Here is my current thinking about DMs. A DM is a function space > > overlaying a topology. Much to my dismay, we > > do not have a topology object, so it hides inside DM. DMClone() creates > > a shallow copy of the topology. We use > > this to have any number of data layouts through PetscSection, laying > > over the same underlying topology. > > > > So for each layout you have, make a separate clone. Then things like > > TransferVec() will respond to the layout in > > that clone. Certainly it works this way in Plex. I admit to not having > > tried this for TransferVec(), but let me know if > > you have any problems. > > > > BTW, I usually use a dm for the solution, which I give to the solver, > > say SNESSetDM(snes, dm), and then clone > > it as dmAux which has the layout for all the auxiliary fields that are > > not involved in the solve. The Plex examples > > all use this form. > > > > Thanks, > > > > Matt > > > > The DMForest has the same Section as the original DM and will thus > have > > the space for exactly 4 variables per cell. I tried pushing another > > section on the DMForest and DMForestPost, but that does not seem to > > work. Please find attached a working example with code to do this, > > but I > > get the error: > > > > PETSC ERROR: PetscSectionGetChart() line 513 in > > /usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c Wrong > > type of object: Parameter # 1 > > > > So, I is there a way to DMForestTransferVec my other vectors from one > > DMForest to DMForestPost. How can I do this? > > > > Many thanks for your help! > > > > Best wishes, Berend. > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ < > http://www.cse.buffalo.edu/~knepley/> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jczhang at mcs.anl.gov Fri Mar 13 10:22:18 2020 From: jczhang at mcs.anl.gov (Junchao Zhang) Date: Fri, 13 Mar 2020 10:22:18 -0500 Subject: [petsc-users] make error in stefanozampini/hypre-cuda-rebased-v2 In-Reply-To: References: Message-ID: Mark, I reproduced the error on Summit with pgi compilers. I found these atomics in /sw/summit/cuda/10.1.243/include/sm_32_atomic_functions.h. 
I am investigating it. Thanks. --Junchao Zhang On Fri, Mar 13, 2020 at 8:17 AM Mark Adams wrote: > This error goes away with 32 bit integers. These methods are part of CUDA > C++ (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html) > and I see unsigned long long int versions, but not signed ones that are in > PETSc. > > My cuda is not that new: > > 08:43 2 stefanozampini/hypre-cuda-rebased-v2= ~/petsc_install$ nvcc > --version > nvcc: NVIDIA (R) Cuda compiler driver > Copyright (c) 2005-2019 NVIDIA Corporation > Built on Sun_Jul_28_19:07:52_PDT_2019 > Cuda compilation tools, release 10.1, V10.1.243 > > Junchao, do you have any ideas? > > I did try just removing these methods and then it complained that they > were not there. > > Thanks, > Mark > > On Thu, Mar 12, 2020 at 9:26 PM Satish Balay wrote: > >> >>>>>>>> >> >> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >> sfpackcuda.cu(334): error: function "atomicMin(long long *, long long)" >> has already been defined >> >> >> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >> sfpackcuda.cu(345): error: function "atomicMax(long long *, long long)" >> has already been defined >> >> >> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >> sfpackcuda.cu(413): error: function "atomicAnd(long long *, long long)" >> has already been defined >> >> >> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >> sfpackcuda.cu(414): error: function "atomicOr(long long *, long long)" >> has already been defined >> >> >> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >> sfpackcuda.cu(415): error: function "atomicXor(long long *, long long)" >> has already been defined >> <<<<<< >> >> This is a strange error. For one the compiler is not showing where the >> prior definition is. >> >> And the prior definition I see is 'double atomicMin(double* address, >> double val)' which shouldn't conflict - as this is c++? >> >> Satish >> >> On Thu, 12 Mar 2020, Mark Adams wrote: >> >> > I get this make error in stefanozampini/hypre-cuda-rebased-v2 >> > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jczhang at mcs.anl.gov Fri Mar 13 11:37:27 2020 From: jczhang at mcs.anl.gov (Junchao Zhang) Date: Fri, 13 Mar 2020 11:37:27 -0500 Subject: [petsc-users] make error in stefanozampini/hypre-cuda-rebased-v2 In-Reply-To: References: Message-ID: Mark, I have a workaround and pushed to stefanozampini/hypre-cuda-rebased-v2. You can try it. Thanks. --Junchao Zhang On Fri, Mar 13, 2020 at 10:22 AM Junchao Zhang wrote: > Mark, > I reproduced the error on Summit with pgi compilers. I found these > atomics in /sw/summit/cuda/10.1.243/include/sm_32_atomic_functions.h. I am > investigating it. Thanks. > --Junchao Zhang > > > On Fri, Mar 13, 2020 at 8:17 AM Mark Adams wrote: > >> This error goes away with 32 bit integers. These methods are part of CUDA >> C++ (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html) >> and I see unsigned long long int versions, but not signed ones that are in >> PETSc. >> >> My cuda is not that new: >> >> 08:43 2 stefanozampini/hypre-cuda-rebased-v2= ~/petsc_install$ nvcc >> --version >> nvcc: NVIDIA (R) Cuda compiler driver >> Copyright (c) 2005-2019 NVIDIA Corporation >> Built on Sun_Jul_28_19:07:52_PDT_2019 >> Cuda compilation tools, release 10.1, V10.1.243 >> >> Junchao, do you have any ideas? 
>> >> I did try just removing these methods and then it complained that they >> were not there. >> >> Thanks, >> Mark >> >> On Thu, Mar 12, 2020 at 9:26 PM Satish Balay wrote: >> >>> >>>>>>>> >>> >>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>> sfpackcuda.cu(334): error: function "atomicMin(long long *, long long)" >>> has already been defined >>> >>> >>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>> sfpackcuda.cu(345): error: function "atomicMax(long long *, long long)" >>> has already been defined >>> >>> >>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>> sfpackcuda.cu(413): error: function "atomicAnd(long long *, long long)" >>> has already been defined >>> >>> >>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>> sfpackcuda.cu(414): error: function "atomicOr(long long *, long long)" >>> has already been defined >>> >>> >>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>> sfpackcuda.cu(415): error: function "atomicXor(long long *, long long)" >>> has already been defined >>> <<<<<< >>> >>> This is a strange error. For one the compiler is not showing where the >>> prior definition is. >>> >>> And the prior definition I see is 'double atomicMin(double* address, >>> double val)' which shouldn't conflict - as this is c++? >>> >>> Satish >>> >>> On Thu, 12 Mar 2020, Mark Adams wrote: >>> >>> > I get this make error in stefanozampini/hypre-cuda-rebased-v2 >>> > >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Mar 13 20:24:43 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 13 Mar 2020 21:24:43 -0400 Subject: [petsc-users] make error in stefanozampini/hypre-cuda-rebased-v2 In-Reply-To: References: Message-ID: It made and past make test, thanks, On Fri, Mar 13, 2020 at 12:37 PM Junchao Zhang wrote: > Mark, I have a workaround and pushed to > stefanozampini/hypre-cuda-rebased-v2. You can try it. Thanks. > --Junchao Zhang > > > On Fri, Mar 13, 2020 at 10:22 AM Junchao Zhang > wrote: > >> Mark, >> I reproduced the error on Summit with pgi compilers. I found these >> atomics in /sw/summit/cuda/10.1.243/include/sm_32_atomic_functions.h. I am >> investigating it. Thanks. >> --Junchao Zhang >> >> >> On Fri, Mar 13, 2020 at 8:17 AM Mark Adams wrote: >> >>> This error goes away with 32 bit integers. These methods are part of >>> CUDA C++ ( >>> https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html) and I >>> see unsigned long long int versions, but not signed ones that are in PETSc. >>> >>> My cuda is not that new: >>> >>> 08:43 2 stefanozampini/hypre-cuda-rebased-v2= ~/petsc_install$ nvcc >>> --version >>> nvcc: NVIDIA (R) Cuda compiler driver >>> Copyright (c) 2005-2019 NVIDIA Corporation >>> Built on Sun_Jul_28_19:07:52_PDT_2019 >>> Cuda compilation tools, release 10.1, V10.1.243 >>> >>> Junchao, do you have any ideas? >>> >>> I did try just removing these methods and then it complained that they >>> were not there. 
>>> >>> Thanks, >>> Mark >>> >>> On Thu, Mar 12, 2020 at 9:26 PM Satish Balay wrote: >>> >>>> >>>>>>>> >>>> >>>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>>> sfpackcuda.cu(334): error: function "atomicMin(long long *, long >>>> long)" has already been defined >>>> >>>> >>>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>>> sfpackcuda.cu(345): error: function "atomicMax(long long *, long >>>> long)" has already been defined >>>> >>>> >>>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>>> sfpackcuda.cu(413): error: function "atomicAnd(long long *, long >>>> long)" has already been defined >>>> >>>> >>>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>>> sfpackcuda.cu(414): error: function "atomicOr(long long *, long long)" >>>> has already been defined >>>> >>>> >>>> /autofs/nccs-svm1_home1/adams/petsc_install/src/vec/is/sf/impls/basic/cuda/ >>>> sfpackcuda.cu(415): error: function "atomicXor(long long *, long >>>> long)" has already been defined >>>> <<<<<< >>>> >>>> This is a strange error. For one the compiler is not showing where the >>>> prior definition is. >>>> >>>> And the prior definition I see is 'double atomicMin(double* address, >>>> double val)' which shouldn't conflict - as this is c++? >>>> >>>> Satish >>>> >>>> On Thu, 12 Mar 2020, Mark Adams wrote: >>>> >>>> > I get this make error in stefanozampini/hypre-cuda-rebased-v2 >>>> > >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rupp at iue.tuwien.ac.at Sat Mar 14 08:39:10 2020 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Sat, 14 Mar 2020 14:39:10 +0100 Subject: [petsc-users] No PETSc User Meeting in 2020 Message-ID: Dear PETSc developers and PETSc users, due to the recent Covid-19 outbreak in Europe there will not be a PETSc User Meeting this year. We are looking into alternatives for keeping in touch with our user base, e.g. via webinars. Suggestions welcome :-) Thanks and best regards, Karl From hongzhang at anl.gov Sat Mar 14 10:31:36 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Sat, 14 Mar 2020 15:31:36 +0000 Subject: [petsc-users] Questions about TSAdjoint for time dependent parameters In-Reply-To: References: <918151C4-5C6E-4AF2-A540-854AB6B1AC32@anl.gov> Message-ID: On Mar 12, 2020, at 8:31 PM, Sajid Ali > wrote: Hi Hong, For the optimal control example, the cost function has an integral term which necessitates the setup of a sub-TS quadrature. The Jacobian with respect to parameter, (henceforth denoted by Jacp) has dimensions that depend upon the number of steps that the TS integrates for. I'm trying to implement a simpler case where the cost function doesn't have an integral term but the parameters are still time dependent. For this, I modified the standard Van der Pol example (ex20adj.c) to make mu a time dependent parameter (though it has the same value at all points in time and I also made the initial conditions & params independent). Since the structure of Jacp doesn't depend on time (i.e. it is the same at all points in time, the structure being identical to the time-independent case), is it necessary that I create a Jacp matrix size whose dimensions are [dimensions of time-independent Jacp] * -ts_max_steps ? Keeping Jacp dimensions the same as dimensions of time-independent Jacp causes the program to crash (possibly due to the fact that Jacp and adjoint vector can't be multiplied). 
Ideally, it would be nice to have a Jacp analog of TSRHSJacobianSetReuse whereby I specify the Jacp routine once and TS knows how to reuse that at all times. Is this possible with the current petsc-master ? When the parameters are time dependent, they have to be discretized in time. In the optimal control example, the parameter at each discrete point is treated as a new parameter. If you have np time-dependent parameters, you can consider the total number of parameters to be Np = np*the number of discrete points (which is typically the total number of time steps). And you need to create a Jacp matrix of dimension N * Np, where N is the dimension of the ODE system. Mu should have the dimension Np. This is why we can simply create Mu from the matrix with MatCreateVecs(Jacp,&Mu[0],NULL) Alternatively, you can set up the solver in the same way that you do for time constant parameters ? create a Jacp of N * np and a mu of np. Be aware that after TSAdjointSolve Mu gives only the derivative wrt the parameters at beginning time. You can access the intermediate values of Mu by using a customized TSAdjoint monitor. I think this does what want with a flag like TSRHSJacobianSetReuse. Another question I have is regarding exclusive calculation of one adjoint. If I'm not interested in adjoint with respect to initial conditions, can I ask TSAdjoing to not calculate that ? Setting the initialization for adjoint vector with respect to initial conditions to be NULL in TSSetCostGradients doesn't work. No, you cannot. Lambda is needed internally by the adjoint solver to calculate the sensitivities wrt parameters. Hong Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From Zane.Jakobs at colorado.edu Sat Mar 14 10:54:07 2020 From: Zane.Jakobs at colorado.edu (Zane Charles Jakobs) Date: Sat, 14 Mar 2020 09:54:07 -0600 Subject: [petsc-users] TS generating inconsistent data Message-ID: Hi PETSc devs, I have some code that implements (essentially) 4D-VAR with PETSc, and the results of both my forward and adjoint integrations look correct to me (i.e. calling TSSolve() and then TSAdjointSolve() works correctly as far as I can tell). However, when I try to use a TaoSolve() to optimize my initial condition, I get this error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: History id should be unique [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12 GIT Date: 2020-02-21 16:53:25 -0600 [0]PETSC ERROR: ./var_ic_test on a arch-linux2-c-debug named DiffeoInvariant by diffeoinvariant Sat Mar 14 09:39:05 2020 [0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native -fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 --with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native -fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich [0]PETSC ERROR: #1 TSHistoryUpdate() line 82 in /usr/local/petsc/src/ts/interface/tshistory.c [0]PETSC ERROR: #2 TSTrajectorySet() line 73 in /usr/local/petsc/src/ts/trajectory/interface/traj.c [0]PETSC ERROR: #3 TSSolve() line 4005 in /usr/local/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #4 MixedModelFormVARICFunctionGradient() line 301 in mixed.c [0]PETSC ERROR: #5 TaoComputeObjectiveAndGradient() line 261 in /usr/local/petsc/src/tao/interface/taosolver_fg.c [0]PETSC ERROR: #6 TaoSolve_LMVM() line 23 in /usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c [0]PETSC ERROR: #7 TaoSolve() line 219 in /usr/local/petsc/src/tao/interface/taosolver.c [0]PETSC ERROR: #8 MixedModelOptimize() line 639 in mixed.c [0]PETSC ERROR: #9 MixedModelOptimizeInitialCondition() line 648 in mixed.c [0]PETSC ERROR: #10 main() line 76 in var_ic_test.c [0]PETSC ERROR: No PETSc Option Table entries [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- In the function MixedModelFormVARICFunctionGradient(), I do ierr = TSSetTime(model->ts, 0.0);CHKERRQ(ierr); ierr = TSSetStepNumber(model->ts, 0);CHKERRQ(ierr); ierr = TSSetFromOptions(model->ts);CHKERRQ(ierr); ierr = TSSetMaxTime(model->ts, model->obs->t);CHKERRQ(ierr); ierr = TSSolve(model->ts, model->X);CHKERRQ(ierr); ... [allocating and setting cost gradient vec] ierr = TSSetCostGradients(model->ts, 1, model->lambda, NULL);CHKERRQ(ierr); ierr = TSAdjointSolve(model->ts);CHKERRQ(ierr); ierr = VecCopy(model->lambda[0], G);CHKERRQ(ierr); What might be causing the above error? Am I using a deprecated version of the Tao interface? (I'm using TaoSetObjectiveAndGradientRoutine, as done in ex20_opt_ic.c) Thanks! -Zane Jakobs -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Sat Mar 14 11:31:55 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Sat, 14 Mar 2020 19:31:55 +0300 Subject: [petsc-users] TS generating inconsistent data In-Reply-To: References: Message-ID: <4355F09D-BB11-4AD8-B51B-4C3BF8530099@gmail.com> Take a look at src/tao/unconstrained/examples/tutorials/spectraladjointassimilation.c You need to call TSResetTrajectory before calling TSSolve when reusing the same TS > On Mar 14, 2020, at 6:54 PM, Zane Charles Jakobs wrote: > > Hi PETSc devs, > > I have some code that implements (essentially) 4D-VAR with PETSc, and the results of both my forward and adjoint integrations look correct to me (i.e. calling TSSolve() and then TSAdjointSolve() works correctly as far as I can tell). However, when I try to use a TaoSolve() to optimize my initial condition, I get this error message: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Petsc has generated inconsistent data > [0]PETSC ERROR: History id should be unique > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12 GIT Date: 2020-02-21 16:53:25 -0600 > [0]PETSC ERROR: ./var_ic_test on a arch-linux2-c-debug named DiffeoInvariant by diffeoinvariant Sat Mar 14 09:39:05 2020 > [0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native -fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 --with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native -fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich > [0]PETSC ERROR: #1 TSHistoryUpdate() line 82 in /usr/local/petsc/src/ts/interface/tshistory.c > [0]PETSC ERROR: #2 TSTrajectorySet() line 73 in /usr/local/petsc/src/ts/trajectory/interface/traj.c > [0]PETSC ERROR: #3 TSSolve() line 4005 in /usr/local/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #4 MixedModelFormVARICFunctionGradient() line 301 in mixed.c > [0]PETSC ERROR: #5 TaoComputeObjectiveAndGradient() line 261 in /usr/local/petsc/src/tao/interface/taosolver_fg.c > [0]PETSC ERROR: #6 TaoSolve_LMVM() line 23 in /usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c > [0]PETSC ERROR: #7 TaoSolve() line 219 in /usr/local/petsc/src/tao/interface/taosolver.c > [0]PETSC ERROR: #8 MixedModelOptimize() line 639 in mixed.c > [0]PETSC ERROR: #9 MixedModelOptimizeInitialCondition() line 648 in mixed.c > [0]PETSC ERROR: #10 main() line 76 in var_ic_test.c > [0]PETSC ERROR: No PETSc Option Table entries > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > > In the function MixedModelFormVARICFunctionGradient(), I do > > ierr = TSSetTime(model->ts, 0.0);CHKERRQ(ierr); > ierr = TSSetStepNumber(model->ts, 0);CHKERRQ(ierr); > ierr = TSSetFromOptions(model->ts);CHKERRQ(ierr); > ierr = TSSetMaxTime(model->ts, model->obs->t);CHKERRQ(ierr); > ierr = TSSolve(model->ts, model->X);CHKERRQ(ierr); > ... [allocating and setting cost gradient vec] > ierr = TSSetCostGradients(model->ts, 1, model->lambda, NULL);CHKERRQ(ierr); > ierr = TSAdjointSolve(model->ts);CHKERRQ(ierr); > ierr = VecCopy(model->lambda[0], G);CHKERRQ(ierr); > > What might be causing the above error? Am I using a deprecated version of the Tao interface? (I'm using TaoSetObjectiveAndGradientRoutine, as done in ex20_opt_ic.c) > > Thanks! > > -Zane Jakobs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Sat Mar 14 11:34:41 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Sat, 14 Mar 2020 19:34:41 +0300 Subject: [petsc-users] TS generating inconsistent data In-Reply-To: <4355F09D-BB11-4AD8-B51B-4C3BF8530099@gmail.com> References: <4355F09D-BB11-4AD8-B51B-4C3BF8530099@gmail.com> Message-ID: <0B637B32-226E-4168-B778-512021D0B8AE@gmail.com> I believe this should be automated in TSSolve when step == 0 as for other TS internal stuff. Hong? Are you ok with this? > On Mar 14, 2020, at 7:31 PM, Stefano Zampini wrote: > > Take a look at > > src/tao/unconstrained/examples/tutorials/spectraladjointassimilation.c > > You need to call TSResetTrajectory before calling TSSolve when reusing the same TS > > >> On Mar 14, 2020, at 6:54 PM, Zane Charles Jakobs > wrote: >> >> Hi PETSc devs, >> >> I have some code that implements (essentially) 4D-VAR with PETSc, and the results of both my forward and adjoint integrations look correct to me (i.e. calling TSSolve() and then TSAdjointSolve() works correctly as far as I can tell). 
However, when I try to use a TaoSolve() to optimize my initial condition, I get this error message: >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Petsc has generated inconsistent data >> [0]PETSC ERROR: History id should be unique >> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12 GIT Date: 2020-02-21 16:53:25 -0600 >> [0]PETSC ERROR: ./var_ic_test on a arch-linux2-c-debug named DiffeoInvariant by diffeoinvariant Sat Mar 14 09:39:05 2020 >> [0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native -fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 --with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native -fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich >> [0]PETSC ERROR: #1 TSHistoryUpdate() line 82 in /usr/local/petsc/src/ts/interface/tshistory.c >> [0]PETSC ERROR: #2 TSTrajectorySet() line 73 in /usr/local/petsc/src/ts/trajectory/interface/traj.c >> [0]PETSC ERROR: #3 TSSolve() line 4005 in /usr/local/petsc/src/ts/interface/ts.c >> [0]PETSC ERROR: #4 MixedModelFormVARICFunctionGradient() line 301 in mixed.c >> [0]PETSC ERROR: #5 TaoComputeObjectiveAndGradient() line 261 in /usr/local/petsc/src/tao/interface/taosolver_fg.c >> [0]PETSC ERROR: #6 TaoSolve_LMVM() line 23 in /usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c >> [0]PETSC ERROR: #7 TaoSolve() line 219 in /usr/local/petsc/src/tao/interface/taosolver.c >> [0]PETSC ERROR: #8 MixedModelOptimize() line 639 in mixed.c >> [0]PETSC ERROR: #9 MixedModelOptimizeInitialCondition() line 648 in mixed.c >> [0]PETSC ERROR: #10 main() line 76 in var_ic_test.c >> [0]PETSC ERROR: No PETSc Option Table entries >> [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov ---------- >> >> In the function MixedModelFormVARICFunctionGradient(), I do >> >> ierr = TSSetTime(model->ts, 0.0);CHKERRQ(ierr); >> ierr = TSSetStepNumber(model->ts, 0);CHKERRQ(ierr); >> ierr = TSSetFromOptions(model->ts);CHKERRQ(ierr); >> ierr = TSSetMaxTime(model->ts, model->obs->t);CHKERRQ(ierr); >> ierr = TSSolve(model->ts, model->X);CHKERRQ(ierr); >> ... [allocating and setting cost gradient vec] >> ierr = TSSetCostGradients(model->ts, 1, model->lambda, NULL);CHKERRQ(ierr); >> ierr = TSAdjointSolve(model->ts);CHKERRQ(ierr); >> ierr = VecCopy(model->lambda[0], G);CHKERRQ(ierr); >> >> What might be causing the above error? Am I using a deprecated version of the Tao interface? (I'm using TaoSetObjectiveAndGradientRoutine, as done in ex20_opt_ic.c) >> >> Thanks! >> >> -Zane Jakobs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Sat Mar 14 12:36:46 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Sat, 14 Mar 2020 11:36:46 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: The configuration crashed earlier than before with your changes. Please see the attached log file when using your branch. The trouble lines should be: " asub=self.mangleFortranFunction("asub") cbody = "extern void "+asub+"(void);\nint main(int argc,char **args)\n{\n "+asub+"();\n return 0;\n}\n"; " Thanks, Fande, On Thu, Mar 12, 2020 at 7:06 PM Satish Balay wrote: > I can't figure out what the stack in the attached configure.log. 
[likely > some stuff isn't getting logged in it] > > Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? > > Satish > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > Thanks, Satish, > > > > But still have the problem. Please see the attached log file. > > > > Thanks, > > > > Fande. > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay wrote: > > > > > Can you retry with the attached patch? > > > > > > BTW: Its best to use the latest patched version - i.e > petsc-3.12.4.tar.gz > > > > > > Satish > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > This fixed the fblaslapack issue. Now have another issue about mumps. > > > > > > > > Please see the log file attached. > > > > > > > > Thanks, > > > > > > > > Fande, > > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay > wrote: > > > > > > > > > For some reason - the fortran compiler libraries check worked fine > > > without > > > > > -lgfortran. > > > > > > > > > > But now - flbaslapack check is failing without it. > > > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > > > > > Satish > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > Hi All, > > > > > > > > > > > > I had an issue when configuring petsc on a linux machine. I have > the > > > > > > following error message: > > > > > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > > > > > > > > > > =============================================================================== > > > > > > > > > > > > TESTING: checkLib from > > > > > > > > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > configure.log for > > > > > > details): > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > --download-fblaslapack libraries cannot be used > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > > The configuration log was attached. > > > > > > > > > > > > Thanks, > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 143547 bytes Desc: not available URL: From balay at mcs.anl.gov Sat Mar 14 12:46:40 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 14 Mar 2020 12:46:40 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Its the same location as before. For some reason configure is not saving the relevant logs. I don't understand saveLog() restoreLog() stuff. Matt, can you check on this? Satish On Sat, 14 Mar 2020, Fande Kong wrote: > The configuration crashed earlier than before with your changes. > > Please see the attached log file when using your branch. The trouble lines > should be: > > " asub=self.mangleFortranFunction("asub") > cbody = "extern void "+asub+"(void);\nint main(int argc,char > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > " > > Thanks, > > Fande, > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay wrote: > > > I can't figure out what the stack in the attached configure.log. 
[likely > > some stuff isn't getting logged in it] > > > > Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? > > > > Satish > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > Thanks, Satish, > > > > > > But still have the problem. Please see the attached log file. > > > > > > Thanks, > > > > > > Fande. > > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay wrote: > > > > > > > Can you retry with the attached patch? > > > > > > > > BTW: Its best to use the latest patched version - i.e > > petsc-3.12.4.tar.gz > > > > > > > > Satish > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > This fixed the fblaslapack issue. Now have another issue about mumps. > > > > > > > > > > Please see the log file attached. > > > > > > > > > > Thanks, > > > > > > > > > > Fande, > > > > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay > > wrote: > > > > > > > > > > > For some reason - the fortran compiler libraries check worked fine > > > > without > > > > > > -lgfortran. > > > > > > > > > > > > But now - flbaslapack check is failing without it. > > > > > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > > > > > > > Satish > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > Hi All, > > > > > > > > > > > > > > I had an issue when configuring petsc on a linux machine. I have > > the > > > > > > > following error message: > > > > > > > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > > > > > > > > > > > > > > > =============================================================================== > > > > > > > > > > > > > > TESTING: checkLib from > > > > > > > > > > > > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > > configure.log for > > > > > > > details): > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > > --download-fblaslapack libraries cannot be used > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > > > > > The configuration log was attached. > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Sat Mar 14 12:57:43 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 14 Mar 2020 12:57:43 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: to work around - you can try: LIBS="-lmpifort -lgfortran" Satish On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > Its the same location as before. For some reason configure is not saving the relevant logs. > > I don't understand saveLog() restoreLog() stuff. Matt, can you check on this? > > Satish > > On Sat, 14 Mar 2020, Fande Kong wrote: > > > The configuration crashed earlier than before with your changes. > > > > Please see the attached log file when using your branch. 
The trouble lines > > should be: > > > > " asub=self.mangleFortranFunction("asub") > > cbody = "extern void "+asub+"(void);\nint main(int argc,char > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > > " > > > > Thanks, > > > > Fande, > > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay wrote: > > > > > I can't figure out what the stack in the attached configure.log. [likely > > > some stuff isn't getting logged in it] > > > > > > Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? > > > > > > Satish > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > Thanks, Satish, > > > > > > > > But still have the problem. Please see the attached log file. > > > > > > > > Thanks, > > > > > > > > Fande. > > > > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay wrote: > > > > > > > > > Can you retry with the attached patch? > > > > > > > > > > BTW: Its best to use the latest patched version - i.e > > > petsc-3.12.4.tar.gz > > > > > > > > > > Satish > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > This fixed the fblaslapack issue. Now have another issue about mumps. > > > > > > > > > > > > Please see the log file attached. > > > > > > > > > > > > Thanks, > > > > > > > > > > > > Fande, > > > > > > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay > > > wrote: > > > > > > > > > > > > > For some reason - the fortran compiler libraries check worked fine > > > > > without > > > > > > > -lgfortran. > > > > > > > > > > > > > > But now - flbaslapack check is failing without it. > > > > > > > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > > > Hi All, > > > > > > > > > > > > > > > > I had an issue when configuring petsc on a linux machine. I have > > > the > > > > > > > > following error message: > > > > > > > > > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > =============================================================================== > > > > > > > > > > > > > > > > TESTING: checkLib from > > > > > > > > > > > > > > > > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > > > configure.log for > > > > > > > > details): > > > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > > > --download-fblaslapack libraries cannot be used > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > > > > > > > > The configuration log was attached. > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From hongzhang at anl.gov Sat Mar 14 13:50:22 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Sat, 14 Mar 2020 18:50:22 +0000 Subject: [petsc-users] TS generating inconsistent data In-Reply-To: References: Message-ID: What happens if you add -ts_trajectory_use_history 0 to your command line options? 
Hong On Mar 14, 2020, at 10:54 AM, Zane Charles Jakobs > wrote: Hi PETSc devs, I have some code that implements (essentially) 4D-VAR with PETSc, and the results of both my forward and adjoint integrations look correct to me (i.e. calling TSSolve() and then TSAdjointSolve() works correctly as far as I can tell). However, when I try to use a TaoSolve() to optimize my initial condition, I get this error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: History id should be unique [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12 GIT Date: 2020-02-21 16:53:25 -0600 [0]PETSC ERROR: ./var_ic_test on a arch-linux2-c-debug named DiffeoInvariant by diffeoinvariant Sat Mar 14 09:39:05 2020 [0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native -fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 --with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native -fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich [0]PETSC ERROR: #1 TSHistoryUpdate() line 82 in /usr/local/petsc/src/ts/interface/tshistory.c [0]PETSC ERROR: #2 TSTrajectorySet() line 73 in /usr/local/petsc/src/ts/trajectory/interface/traj.c [0]PETSC ERROR: #3 TSSolve() line 4005 in /usr/local/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #4 MixedModelFormVARICFunctionGradient() line 301 in mixed.c [0]PETSC ERROR: #5 TaoComputeObjectiveAndGradient() line 261 in /usr/local/petsc/src/tao/interface/taosolver_fg.c [0]PETSC ERROR: #6 TaoSolve_LMVM() line 23 in /usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c [0]PETSC ERROR: #7 TaoSolve() line 219 in /usr/local/petsc/src/tao/interface/taosolver.c [0]PETSC ERROR: #8 MixedModelOptimize() line 639 in mixed.c [0]PETSC ERROR: #9 MixedModelOptimizeInitialCondition() line 648 in mixed.c [0]PETSC ERROR: #10 main() line 76 in var_ic_test.c [0]PETSC ERROR: No PETSc Option Table entries [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- In the function MixedModelFormVARICFunctionGradient(), I do ierr = TSSetTime(model->ts, 0.0);CHKERRQ(ierr); ierr = TSSetStepNumber(model->ts, 0);CHKERRQ(ierr); ierr = TSSetFromOptions(model->ts);CHKERRQ(ierr); ierr = TSSetMaxTime(model->ts, model->obs->t);CHKERRQ(ierr); ierr = TSSolve(model->ts, model->X);CHKERRQ(ierr); ... [allocating and setting cost gradient vec] ierr = TSSetCostGradients(model->ts, 1, model->lambda, NULL);CHKERRQ(ierr); ierr = TSAdjointSolve(model->ts);CHKERRQ(ierr); ierr = VecCopy(model->lambda[0], G);CHKERRQ(ierr); What might be causing the above error? Am I using a deprecated version of the Tao interface? (I'm using TaoSetObjectiveAndGradientRoutine, as done in ex20_opt_ic.c) Thanks! -Zane Jakobs -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Sat Mar 14 13:53:00 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Sat, 14 Mar 2020 12:53:00 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: It looks like it still has the same issue. Please see the log file attached. PETSc did not pick up "-lmpifort" correctly, and PETSc seemed to just ignore this flag. 
Thanks, Fande, On Sat, Mar 14, 2020 at 11:57 AM Satish Balay wrote: > to work around - you can try: > > LIBS="-lmpifort -lgfortran" > > Satish > > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > Its the same location as before. For some reason configure is not saving > the relevant logs. > > > > I don't understand saveLog() restoreLog() stuff. Matt, can you check on > this? > > > > Satish > > > > On Sat, 14 Mar 2020, Fande Kong wrote: > > > > > The configuration crashed earlier than before with your changes. > > > > > > Please see the attached log file when using your branch. The trouble > lines > > > should be: > > > > > > " asub=self.mangleFortranFunction("asub") > > > cbody = "extern void "+asub+"(void);\nint main(int argc,char > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > > > " > > > > > > Thanks, > > > > > > Fande, > > > > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay > wrote: > > > > > > > I can't figure out what the stack in the attached configure.log. > [likely > > > > some stuff isn't getting logged in it] > > > > > > > > Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? > > > > > > > > Satish > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > Thanks, Satish, > > > > > > > > > > But still have the problem. Please see the attached log file. > > > > > > > > > > Thanks, > > > > > > > > > > Fande. > > > > > > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay > wrote: > > > > > > > > > > > Can you retry with the attached patch? > > > > > > > > > > > > BTW: Its best to use the latest patched version - i.e > > > > petsc-3.12.4.tar.gz > > > > > > > > > > > > Satish > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > This fixed the fblaslapack issue. Now have another issue about > mumps. > > > > > > > > > > > > > > Please see the log file attached. > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < > balay at mcs.anl.gov> > > > > wrote: > > > > > > > > > > > > > > > For some reason - the fortran compiler libraries check > worked fine > > > > > > without > > > > > > > > -lgfortran. > > > > > > > > > > > > > > > > But now - flbaslapack check is failing without it. > > > > > > > > > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > > > > > Hi All, > > > > > > > > > > > > > > > > > > I had an issue when configuring petsc on a linux machine. 
> I have > > > > the > > > > > > > > > following error message: > > > > > > > > > > > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > =============================================================================== > > > > > > > > > > > > > > > > > > TESTING: checkLib from > > > > > > > > > > > > > > > > > > > > > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > > > > configure.log for > > > > > > > > > details): > > > > > > > > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > > > > --download-fblaslapack libraries cannot be used > > > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > > > > > > > > > > > The configuration log was attached. > > > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 3331141 bytes Desc: not available URL: From balay at mcs.anl.gov Sat Mar 14 15:48:42 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 14 Mar 2020 15:48:42 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --download-hypre=1 --with-debugging=no --with-shared-libraries=1 --download-fblaslapack=1 --download-metis=1 --download-ptscotch=1 --download-parmetis=1 --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 --download-slepc=git://https://gitlab.com/slepc/slepc.git --download-slepc-commit= 59ff81b --with-mpi=1 --with-cxx-dialect=C++11 --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--with-new-dtags=0 -Wl,--gc-sections -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib -L/home/kongf/workhome/rod/miniconda3/lib AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran -lmpifort You are missing quotes with LIBS option - and likely the libraries in the wrong order. Suggest using: LIBS="-lmpifort -lgfortran" or 'LIBS=-lmpifort -lgfortran' Assuming you are invoking configure from shell. Satish On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > to work around - you can try: > > LIBS="-lmpifort -lgfortran" > > Satish > > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > Its the same location as before. For some reason configure is not saving the relevant logs. > > > > I don't understand saveLog() restoreLog() stuff. 
Matt, can you check on this? > > > > Satish > > > > On Sat, 14 Mar 2020, Fande Kong wrote: > > > > > The configuration crashed earlier than before with your changes. > > > > > > Please see the attached log file when using your branch. The trouble lines > > > should be: > > > > > > " asub=self.mangleFortranFunction("asub") > > > cbody = "extern void "+asub+"(void);\nint main(int argc,char > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > > > " > > > > > > Thanks, > > > > > > Fande, > > > > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay wrote: > > > > > > > I can't figure out what the stack in the attached configure.log. [likely > > > > some stuff isn't getting logged in it] > > > > > > > > Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? > > > > > > > > Satish > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > Thanks, Satish, > > > > > > > > > > But still have the problem. Please see the attached log file. > > > > > > > > > > Thanks, > > > > > > > > > > Fande. > > > > > > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay wrote: > > > > > > > > > > > Can you retry with the attached patch? > > > > > > > > > > > > BTW: Its best to use the latest patched version - i.e > > > > petsc-3.12.4.tar.gz > > > > > > > > > > > > Satish > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > This fixed the fblaslapack issue. Now have another issue about mumps. > > > > > > > > > > > > > > Please see the log file attached. > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay > > > > wrote: > > > > > > > > > > > > > > > For some reason - the fortran compiler libraries check worked fine > > > > > > without > > > > > > > > -lgfortran. > > > > > > > > > > > > > > > > But now - flbaslapack check is failing without it. > > > > > > > > > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > > > > > Hi All, > > > > > > > > > > > > > > > > > > I had an issue when configuring petsc on a linux machine. I have > > > > the > > > > > > > > > following error message: > > > > > > > > > > > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > =============================================================================== > > > > > > > > > > > > > > > > > > TESTING: checkLib from > > > > > > > > > > > > > > > > > > > > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > > > > configure.log for > > > > > > > > > details): > > > > > > > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > > > > --download-fblaslapack libraries cannot be used > > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > > > > > > > > > > > The configuration log was attached. 
> > > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From fdkong.jd at gmail.com Sat Mar 14 17:37:08 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Sat, 14 Mar 2020 16:37:08 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: OK. I finally got PETSc complied. "-lgfortran" was required by fblaslapack "-lmpifort" was required by mumps. However, I had to manually add the same thing for hypre as well: git diff diff --git a/config/BuildSystem/config/packages/hypre.py b/config/BuildSystem/config/packages/hypre.py index 4d915c312f..f4300230a6 100644 --- a/config/BuildSystem/config/packages/hypre.py +++ b/config/BuildSystem/config/packages/hypre.py @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): args.append('--with-lapack-lib=" "') args.append('--with-blas=no') args.append('--with-lapack=no') + args.append('LIBS="-lmpifort -lgfortran"') if self.openmp.found: args.append('--with-openmp') self.usesopenmp = 'yes' Why hypre could not pick up LIBS options automatically? Thanks, Fande, On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < petsc-users at mcs.anl.gov> wrote: > Configure Options: --configModules=PETSc.Configure > --optionsModule=config.compilerOptions --download-hypre=1 > --with-debugging=no --with-shared-libraries=1 --download-fblaslapack=1 > --download-metis=1 --download-ptscotch=1 --download-parmetis=1 > --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 > --download-slepc=git://https://gitlab.com/slepc/slepc.git > --download-slepc-commit= 59ff81b --with-mpi=1 --with-cxx-dialect=C++11 > --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona > -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 > -ffunction-sections -pipe -isystem > /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2 > -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now > -Wl,--with-new-dtags=0 -Wl,--gc-sections > -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib > -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib > -L/home/kongf/workhome/rod/miniconda3/lib > AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar > --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran -lmpifort > > You are missing quotes with LIBS option - and likely the libraries in the > wrong order. > > Suggest using: > > LIBS="-lmpifort -lgfortran" > or > 'LIBS=-lmpifort -lgfortran' > > Assuming you are invoking configure from shell. > > Satish > > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > to work around - you can try: > > > > LIBS="-lmpifort -lgfortran" > > > > Satish > > > > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > > > Its the same location as before. For some reason configure is not > saving the relevant logs. > > > > > > I don't understand saveLog() restoreLog() stuff. Matt, can you check > on this? > > > > > > Satish > > > > > > On Sat, 14 Mar 2020, Fande Kong wrote: > > > > > > > The configuration crashed earlier than before with your changes. > > > > > > > > Please see the attached log file when using your branch. 
The trouble > lines > > > > should be: > > > > > > > > " asub=self.mangleFortranFunction("asub") > > > > cbody = "extern void "+asub+"(void);\nint main(int argc,char > > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > > > > " > > > > > > > > Thanks, > > > > > > > > Fande, > > > > > > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay > wrote: > > > > > > > > > I can't figure out what the stack in the attached configure.log. > [likely > > > > > some stuff isn't getting logged in it] > > > > > > > > > > Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? > > > > > > > > > > Satish > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > Thanks, Satish, > > > > > > > > > > > > But still have the problem. Please see the attached log file. > > > > > > > > > > > > Thanks, > > > > > > > > > > > > Fande. > > > > > > > > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay > wrote: > > > > > > > > > > > > > Can you retry with the attached patch? > > > > > > > > > > > > > > BTW: Its best to use the latest patched version - i.e > > > > > petsc-3.12.4.tar.gz > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > > > This fixed the fblaslapack issue. Now have another issue > about mumps. > > > > > > > > > > > > > > > > Please see the log file attached. > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < > balay at mcs.anl.gov> > > > > > wrote: > > > > > > > > > > > > > > > > > For some reason - the fortran compiler libraries check > worked fine > > > > > > > without > > > > > > > > > -lgfortran. > > > > > > > > > > > > > > > > > > But now - flbaslapack check is failing without it. > > > > > > > > > > > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > > > > > > > > > > > > > > > > Hi All, > > > > > > > > > > > > > > > > > > > > I had an issue when configuring petsc on a linux > machine. I have > > > > > the > > > > > > > > > > following error message: > > > > > > > > > > > > > > > > > > > > Compiling FBLASLAPACK; this may take several minutes > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > =============================================================================== > > > > > > > > > > > > > > > > > > > > TESTING: checkLib from > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > > > > > configure.log for > > > > > > > > > > details): > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > > > > > --download-fblaslapack libraries cannot be used > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > The configuration log was attached. 
> > > > > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > > > > > > Fande, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Sat Mar 14 18:01:37 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Sat, 14 Mar 2020 17:01:37 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Without touching the configuration file, the option: --download-hypre-configure-arguments='LIBS="-lmpifort -lgfortran"', also works. Thanks, Satish, Fande, On Sat, Mar 14, 2020 at 4:37 PM Fande Kong wrote: > OK. I finally got PETSc complied. > > "-lgfortran" was required by fblaslapack > "-lmpifort" was required by mumps. > > However, I had to manually add the same thing for hypre as well: > > git diff > diff --git a/config/BuildSystem/config/packages/hypre.py > b/config/BuildSystem/config/packages/hypre.py > index 4d915c312f..f4300230a6 100644 > --- a/config/BuildSystem/config/packages/hypre.py > +++ b/config/BuildSystem/config/packages/hypre.py > @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): > args.append('--with-lapack-lib=" "') > args.append('--with-blas=no') > args.append('--with-lapack=no') > + args.append('LIBS="-lmpifort -lgfortran"') > if self.openmp.found: > args.append('--with-openmp') > self.usesopenmp = 'yes' > > > Why hypre could not pick up LIBS options automatically? > > > Thanks, > > Fande, > > > > > On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Configure Options: --configModules=PETSc.Configure >> --optionsModule=config.compilerOptions --download-hypre=1 >> --with-debugging=no --with-shared-libraries=1 --download-fblaslapack=1 >> --download-metis=1 --download-ptscotch=1 --download-parmetis=1 >> --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 >> --download-slepc=git://https://gitlab.com/slepc/slepc.git >> --download-slepc-commit= 59ff81b --with-mpi=1 --with-cxx-dialect=C++11 >> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona >> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 >> -ffunction-sections -pipe -isystem >> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2 >> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now >> -Wl,--with-new-dtags=0 -Wl,--gc-sections >> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib >> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib >> -L/home/kongf/workhome/rod/miniconda3/lib >> AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar >> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran -lmpifort >> >> You are missing quotes with LIBS option - and likely the libraries in the >> wrong order. >> >> Suggest using: >> >> LIBS="-lmpifort -lgfortran" >> or >> 'LIBS=-lmpifort -lgfortran' >> >> Assuming you are invoking configure from shell. >> >> Satish >> >> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: >> >> > to work around - you can try: >> > >> > LIBS="-lmpifort -lgfortran" >> > >> > Satish >> > >> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: >> > >> > > Its the same location as before. For some reason configure is not >> saving the relevant logs. >> > > >> > > I don't understand saveLog() restoreLog() stuff. Matt, can you check >> on this? 
>> > > >> > > Satish >> > > >> > > On Sat, 14 Mar 2020, Fande Kong wrote: >> > > >> > > > The configuration crashed earlier than before with your changes. >> > > > >> > > > Please see the attached log file when using your branch. The >> trouble lines >> > > > should be: >> > > > >> > > > " asub=self.mangleFortranFunction("asub") >> > > > cbody = "extern void "+asub+"(void);\nint main(int argc,char >> > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; >> > > > " >> > > > >> > > > Thanks, >> > > > >> > > > Fande, >> > > > >> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay >> wrote: >> > > > >> > > > > I can't figure out what the stack in the attached configure.log. >> [likely >> > > > > some stuff isn't getting logged in it] >> > > > > >> > > > > Can you retry with branch 'balay/fix-checkFortranLibraries/maint'? >> > > > > >> > > > > Satish >> > > > > >> > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >> > > > > >> > > > > > Thanks, Satish, >> > > > > > >> > > > > > But still have the problem. Please see the attached log file. >> > > > > > >> > > > > > Thanks, >> > > > > > >> > > > > > Fande. >> > > > > > >> > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay >> wrote: >> > > > > > >> > > > > > > Can you retry with the attached patch? >> > > > > > > >> > > > > > > BTW: Its best to use the latest patched version - i.e >> > > > > petsc-3.12.4.tar.gz >> > > > > > > >> > > > > > > Satish >> > > > > > > >> > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >> > > > > > > >> > > > > > > > This fixed the fblaslapack issue. Now have another issue >> about mumps. >> > > > > > > > >> > > > > > > > Please see the log file attached. >> > > > > > > > >> > > > > > > > Thanks, >> > > > > > > > >> > > > > > > > Fande, >> > > > > > > > >> > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < >> balay at mcs.anl.gov> >> > > > > wrote: >> > > > > > > > >> > > > > > > > > For some reason - the fortran compiler libraries check >> worked fine >> > > > > > > without >> > > > > > > > > -lgfortran. >> > > > > > > > > >> > > > > > > > > But now - flbaslapack check is failing without it. >> > > > > > > > > >> > > > > > > > > To work arround - you can use option LIBS=-lgfortran >> > > > > > > > > >> > > > > > > > > Satish >> > > > > > > > > >> > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >> > > > > > > > > >> > > > > > > > > > Hi All, >> > > > > > > > > > >> > > > > > > > > > I had an issue when configuring petsc on a linux >> machine. 
I have >> > > > > the >> > > > > > > > > > following error message: >> > > > > > > > > > >> > > > > > > > > > Compiling FBLASLAPACK; this may take several >> minutes >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > >> > > > > > > >> > > > > >> =============================================================================== >> > > > > > > > > > >> > > > > > > > > > TESTING: checkLib from >> > > > > > > > > > >> > > > > > > > > >> > > > > > > >> > > > > >> config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > >> > > > > > > >> > > > > >> ******************************************************************************* >> > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see >> > > > > > > configure.log for >> > > > > > > > > > details): >> > > > > > > > > > >> > > > > > > > > >> > > > > > > >> > > > > >> ------------------------------------------------------------------------------- >> > > > > > > > > > --download-fblaslapack libraries cannot be used >> > > > > > > > > > >> > > > > > > > > >> > > > > > > >> > > > > >> ******************************************************************************* >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > > The configuration log was attached. >> > > > > > > > > > >> > > > > > > > > > Thanks, >> > > > > > > > > > >> > > > > > > > > > Fande, >> > > > > > > > > > >> > > > > > > > > >> > > > > > > > > >> > > > > > > > >> > > > > > > >> > > > > > >> > > > > >> > > > > >> > > > >> > > >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yyang85 at stanford.edu Mon Mar 16 08:04:55 2020 From: yyang85 at stanford.edu (Yuyun Yang) Date: Mon, 16 Mar 2020 13:04:55 +0000 Subject: [petsc-users] Using DMDA with regular matrices Message-ID: Hello team, Hope you're staying healthy amid the coronavirus craziness. Just want to ask a basic question: if my grid is managed by DMDA, then do matrices in all my intermediate steps have to be compatible with DMDA and formed in a special way, or can I just form them as usual (i.e. do MatCreate, MatMPIAIJSetPreallocation, MatSetValue etc.)? Asking this because I'm using a mix of matrix-free and matrix-based operations, and MatShell seems to require the use of DMDA. Thanks for your help. Best, Yuyun -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Mar 16 08:22:16 2020 From: jed at jedbrown.org (Jed Brown) Date: Mon, 16 Mar 2020 07:22:16 -0600 Subject: [petsc-users] Using DMDA with regular matrices In-Reply-To: References: Message-ID: <87k13k4epj.fsf@jedbrown.org> Yuyun Yang writes: > Hello team, > > Hope you're staying healthy amid the coronavirus craziness. > > Just want to ask a basic question: if my grid is managed by DMDA, then > do matrices in all my intermediate steps have to be compatible with > DMDA and formed in a special way, or can I just form them as usual > (i.e. do MatCreate, MatMPIAIJSetPreallocation, MatSetValue etc.)? DMCreateMatrix does the creation and preallocation for you, and enables you to use MatSetValuesStencil. It's otherwise the same. Sometimes people use auxiliary DMs to describe distribution of values that aren't the same size/shape as solution variables (e.g., cell-centered tensor-valued coefficients), though usually those are accessed in your residual and Jacobian assembly functions, not via matrices. 
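[Illustrative sketch added to the archive, not part of the original reply: a minimal standalone example of the pattern described above, where DMCreateMatrix() returns a matrix already sized, preallocated, and distributed to match the DMDA, and MatSetValuesStencil() then takes global (i,j) grid indices. The 64x64 grid and the 5-point stencil values are placeholders.]

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Mat            A;
  DMDALocalInfo  info;
  MatStencil     row, col[5];
  PetscScalar    v[5];
  PetscInt       i, j, n;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);

  /* The matrix comes back sized, preallocated, and distributed to match the DMDA */
  ierr = DMCreateMatrix(da, &A);CHKERRQ(ierr);

  ierr = DMDAGetLocalInfo(da, &info);CHKERRQ(ierr);
  for (j = info.ys; j < info.ys + info.ym; j++) {
    for (i = info.xs; i < info.xs + info.xm; i++) {
      n     = 0;
      row.i = i; row.j = j;
      col[n].i = i; col[n].j = j; v[n++] = 4.0;                       /* diagonal */
      if (i > 0)           {col[n].i = i-1; col[n].j = j;   v[n++] = -1.0;}
      if (i < info.mx - 1) {col[n].i = i+1; col[n].j = j;   v[n++] = -1.0;}
      if (j > 0)           {col[n].i = i;   col[n].j = j-1; v[n++] = -1.0;}
      if (j < info.my - 1) {col[n].i = i;   col[n].j = j+1; v[n++] = -1.0;}
      ierr = MatSetValuesStencil(A, 1, &row, n, col, v, INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* ... hand A to a KSP/SNES as usual; nothing else about Mat usage changes ... */

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}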
> Asking this because I'm using a mix of matrix-free and matrix-based > operations, and MatShell seems to require the use of DMDA. MatShell has no dependence on DMDA. It's entirely in your hands, via the callback(s) that you implement. From yyang85 at stanford.edu Mon Mar 16 09:08:22 2020 From: yyang85 at stanford.edu (Yuyun Yang) Date: Mon, 16 Mar 2020 14:08:22 +0000 Subject: [petsc-users] Using DMDA with regular matrices In-Reply-To: <87k13k4epj.fsf@jedbrown.org> References: , <87k13k4epj.fsf@jedbrown.org> Message-ID: Great, thanks so much for the explanations! ?? Outlook for Android ________________________________ From: Jed Brown Sent: Monday, March 16, 2020 9:22:16 PM To: Yuyun Yang ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Using DMDA with regular matrices Yuyun Yang writes: > Hello team, > > Hope you're staying healthy amid the coronavirus craziness. > > Just want to ask a basic question: if my grid is managed by DMDA, then > do matrices in all my intermediate steps have to be compatible with > DMDA and formed in a special way, or can I just form them as usual > (i.e. do MatCreate, MatMPIAIJSetPreallocation, MatSetValue etc.)? DMCreateMatrix does the creation and preallocation for you, and enables you to use MatSetValuesStencil. It's otherwise the same. Sometimes people use auxiliary DMs to describe distribution of values that aren't the same size/shape as solution variables (e.g., cell-centered tensor-valued coefficients), though usually those are accessed in your residual and Jacobian assembly functions, not via matrices. > Asking this because I'm using a mix of matrix-free and matrix-based > operations, and MatShell seems to require the use of DMDA. MatShell has no dependence on DMDA. It's entirely in your hands, via the callback(s) that you implement. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliu29 at ncsu.edu Mon Mar 16 15:31:25 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Mon, 16 Mar 2020 14:31:25 -0600 Subject: [petsc-users] About the initial guess for KSP method. Message-ID: Hi, all, I am testing KSPSetInitialGuessNonzero using the case https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex50.c.html This case is special. WIth zero initial guess, 1 iteration can deliver the exact solution. But when I tried true for KSPSetInitialGuessNonzero, it shows the same convergence history as false. Does the code change KSPSetInitialGuessNonzero to false automatically? Take care. Thanks, Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Mon Mar 16 16:19:51 2020 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Mon, 16 Mar 2020 22:19:51 +0100 Subject: [petsc-users] node DG with DMPlex Message-ID: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> Hi all, I would like to implement a nodal DG with the DMPlex interface. 
Therefore, i must add the internal nodes to the DM (GLL nodes), with the constrains : 1) Add them as solution points, with correct coordinates (and keep the good rotational ordering) 2) Find the shared nodes at faces in order to compute the fluxes 3) For parallel use, so synchronize the ghost node at each time steps I found elements of answers in those threads : https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html However, it's not clear for me where to begin. Quoting Matt, i should : " DMGetCoordinateDM(dm, &cdm); DMCreateLocalVector(cdm, &coordinatesLocal); DMSetCoordinatesLocal(dm, coordinatesLocal);" However, i will not create ghost nodes this way. And i'm not sure to keep the good ordering. This part should be implemented in the PetscFE interface, for high order discrete solutions. I did not succeed in finding the correct part of the source doing it. Could you please give me some hint to begin correctly thoses tasks ? Thanks, Yann From sajidsyed2021 at u.northwestern.edu Mon Mar 16 19:29:01 2020 From: sajidsyed2021 at u.northwestern.edu (Sajid Ali) Date: Mon, 16 Mar 2020 19:29:01 -0500 Subject: [petsc-users] GAMG parameters for ideal coarsening ratio Message-ID: Hi PETSc-developers, As per the manual, the ideal gamg parameters are those which result in MatPtAP time being roughly similar to (or just slightly larger) than KSP solve times. The way to adjust this is by changing the threshold for coarsening and/or squaring the graph. I was working with a grid of size 2^14 by 2^14 in a linear & time-independent TS with the following params : #PETSc Option Table entries: -ksp_monitor -ksp_rtol 1e-5 -ksp_type fgmres -ksp_view -log_view -mg_levels_ksp_type gmres -mg_levels_pc_type jacobi -pc_gamg_coarse_eq_limit 1000 -pc_gamg_reuse_interpolation true -pc_gamg_square_graph 10 -pc_gamg_threshold -0.04 -pc_gamg_type agg -pc_gamg_use_parallel_coarse_grid_solver -pc_mg_monitor -pc_type gamg -prop_steps 8 -ts_monitor -ts_type cn #End of PETSc Option Table entries With this I get a grid complexity of 1.33047, 6 multigrid levels, MatPtAP/KSPSolve ratio of 0.24, and the linear solve at each TS step takes 5 iterations (with approx one order of magnitude reduction in residual per step for iterations 2 through 5 and two orders for the first). The convergence and grid complexity look good, but the ratio of grid coarsening time to ksp-solve time is far from ideal. I've attached the log file from this set of base parameters as well. To investigate the effect of coarsening rates, I ran a parameter sweep over the coarsening parameters (threshold and sq. graph) and I'm confused by the results. For some reason either the number of gamg levels turns out to be too high or it is set to 1. When I try to manually set the number of levels to 4 (with pc_mg_levels 4 and thres. -0.04/ squaring of 10) I see performance much worse than the base parameters. Any advice as to what I'm missing in my search for a set of params where MatPtAP to KSPSolve is ~ 1 ? [image: image.png] Thanks in advance for the help and hope everyone is staying safe from the virus! -- Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 108837 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_gamg Type: application/octet-stream Size: 105913 bytes Desc: not available URL: From jed at jedbrown.org Mon Mar 16 20:08:35 2020 From: jed at jedbrown.org (Jed Brown) Date: Mon, 16 Mar 2020 19:08:35 -0600 Subject: [petsc-users] GAMG parameters for ideal coarsening ratio In-Reply-To: References: Message-ID: <87y2rz23fw.fsf@jedbrown.org> Sajid Ali writes: > Hi PETSc-developers, > > As per the manual, the ideal gamg parameters are those which result in > MatPtAP time being roughly similar to (or just slightly larger) than KSP > solve times. The way to adjust this is by changing the threshold for > coarsening and/or squaring the graph. I was working with a grid of size > 2^14 by 2^14 in a linear & time-independent TS with the following params : > > #PETSc Option Table entries: > -ksp_monitor > -ksp_rtol 1e-5 > -ksp_type fgmres > -ksp_view > -log_view > -mg_levels_ksp_type gmres > -mg_levels_pc_type jacobi > -pc_gamg_coarse_eq_limit 1000 > -pc_gamg_reuse_interpolation true > -pc_gamg_square_graph 10 > -pc_gamg_threshold -0.04 > -pc_gamg_type agg > -pc_gamg_use_parallel_coarse_grid_solver > -pc_mg_monitor > -pc_type gamg > -prop_steps 8 > -ts_monitor > -ts_type cn > #End of PETSc Option Table entries > > With this I get a grid complexity of 1.33047, 6 multigrid levels, > MatPtAP/KSPSolve ratio of 0.24, and the linear solve at each TS step takes > 5 iterations (with approx one order of magnitude reduction in residual per > step for iterations 2 through 5 and two orders for the first). The > convergence and grid complexity look good, but the ratio of grid coarsening > time to ksp-solve time is far from ideal. I've attached the log file from > this set of base parameters as well. > > To investigate the effect of coarsening rates, I ran a parameter sweep over > the coarsening parameters (threshold and sq. graph) and I'm confused by the > results. For some reason either the number of gamg levels turns out to be > too high or it is set to 1. When I try to manually set the number of levels > to 4 (with pc_mg_levels 4 and thres. -0.04/ squaring of 10) I see > performance much worse than the base parameters. Any advice as to what I'm > missing in my search for a set of params where MatPtAP to KSPSolve is ~ 1 ? Your solver looks efficient and the time to setup roughly matches the solve time: PCSetUp 8 1.0 1.2202e+02 1.0 4.39e+09 1.0 4.9e+05 6.5e+03 6.3e+02 36 12 19 27 21 36 12 19 27 22 9201 PCApply 40 1.0 1.1077e+02 1.0 2.63e+10 1.0 2.0e+06 3.8e+03 2.0e+03 33 72 79 65 68 33 72 79 65 68 60662 If you have a specific need to reduce setup time or reduce solve time (e.g., if you'll do many solves with the same setup), you might be able to adjust. But your iteration count is pretty low so probably not a lot of room in that direction. From mfadams at lbl.gov Mon Mar 16 21:56:08 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 16 Mar 2020 22:56:08 -0400 Subject: [petsc-users] GAMG parameters for ideal coarsening ratio In-Reply-To: References: Message-ID: On Mon, Mar 16, 2020 at 8:31 PM Sajid Ali wrote: > Hi PETSc-developers, > > As per the manual, the ideal gamg parameters are those which result in > MatPtAP time being roughly similar to (or just slightly larger) than KSP > solve times. The way to adjust this is by changing the threshold for > coarsening and/or squaring the graph. 
I was working with a grid of size > 2^14 by 2^14 in a linear & time-independent TS with the following params : > > #PETSc Option Table entries: > -ksp_monitor > -ksp_rtol 1e-5 > -ksp_type fgmres > -ksp_view > -log_view > -mg_levels_ksp_type gmres > -mg_levels_pc_type jacobi > -pc_gamg_coarse_eq_limit 1000 > -pc_gamg_reuse_interpolation true > -pc_gamg_square_graph 10 > -pc_gamg_threshold -0.04 > -pc_gamg_type agg > -pc_gamg_use_parallel_coarse_grid_solver > -pc_mg_monitor > -pc_type gamg > -prop_steps 8 > -ts_monitor > -ts_type cn > #End of PETSc Option Table entries > > With this I get a grid complexity of 1.33047, 6 multigrid levels, > MatPtAP/KSPSolve ratio of 0.24, and the linear solve at each TS step takes > 5 iterations (with approx one order of magnitude reduction in residual per > step for iterations 2 through 5 and two orders for the first). The > convergence and grid complexity look good, but the ratio of grid coarsening > time to ksp-solve time is far from ideal. I've attached the log file from > this set of base parameters as well. > Just a few notes, * a negative threshold makes no sense, but I use it as a flag to keep all matrix entries, even 0.0, when the graph, used for coarsening, is created. A threshold of zero says drop only edges with 0 weight. All edge weights are non-negative by construction. A threshold >= 0 says drop edges that are <= the threshold. A threshold of 0.1 is very very high. So you probably want to scan 0-0.1. * -pc_gamg_square_graph N, says square the graph on the first N levels. So "10" is basically infinity, although for 2D problems you can hit 10. > > To investigate the effect of coarsening rates, I ran a parameter sweep > over the coarsening parameters (threshold and sq. graph) and I'm confused > by the results. For some reason either the number of gamg levels turns out > to be too high or it is set to 1. > Are you solving a mass matrix? A lot of coarsening parameters can decimate all the edges of a FE mass matrix, in which case GAMG just gives you a one level solver. Which is a good choice for a mass matrix. > When I try to manually set the number of levels to 4 (with pc_mg_levels 4 > and thres. -0.04/ squaring of 10) I see performance much worse than the > base parameters. Any advice as to what I'm missing in my search for a set > of params where MatPtAP to KSPSolve is ~ 1 ? > If you are not using a Laplacian as the test then use that. That will help to get us on the same page. If you are using a Laplacian then we need to get some more data from you (ie, run with -info and grep on GAMG). Mark > > [image: image.png] > > Thanks in advance for the help and hope everyone is staying safe from the > virus! > > > -- > Sajid Ali | PhD Candidate > Applied Physics > Northwestern University > s-sajid-ali.github.io > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 108837 bytes Desc: not available URL: From yang.bo at ntu.edu.sg Tue Mar 17 07:23:14 2020 From: yang.bo at ntu.edu.sg (Yang Bo (Asst Prof)) Date: Tue, 17 Mar 2020 12:23:14 +0000 Subject: [petsc-users] Random initial states of EPSSolve Message-ID: <328C0461-B0F0-4A90-B385-408F7D1FDFC0@ntu.edu.sg> Hi everyone, I am diagonalising a large symmetric real matrix for its null space (highly degenerate eigenstates with zero eigenvalues). I am using krylovschur which is variational, which is supposed to start with a set of random initial vectors. 
They will eventually converge to random vectors in the null space. The problem is that if I run the EPSSOLVE with the same set of parameters (e.g. -eps_ncv and -eps_mpd), I always get the same eigenstates in the null space. This implies that the solver always start the same set of initial ?random? vectors. How does petsc generate the initial random vectors for krylovschur? Is there a way for me to generate different random initial vectors every time I run the diagonalisation (of the same matrix)? Thanks, and stay safe and healthy! Cheers, Yang Bo ________________________________ CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. Towards a sustainable earth: Print only when necessary. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Tue Mar 17 07:28:13 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 17 Mar 2020 13:28:13 +0100 Subject: [petsc-users] Random initial states of EPSSolve In-Reply-To: <328C0461-B0F0-4A90-B385-408F7D1FDFC0@ntu.edu.sg> References: <328C0461-B0F0-4A90-B385-408F7D1FDFC0@ntu.edu.sg> Message-ID: <2FCFF22E-5A7A-47E4-8EE8-CDF368A992DD@dsic.upv.es> You can set a different seed for the random number generator as follows: - Use EPSGetBV() to extract the BV object - Use BVGetRandomContext() to extract the PetscRandom object - Use PetscRandomSetSeed() to set the new seed. Jose > El 17 mar 2020, a las 13:23, Yang Bo (Asst Prof) escribi?: > > Hi everyone, > > I am diagonalising a large symmetric real matrix for its null space (highly degenerate eigenstates with zero eigenvalues). I am using krylovschur which is variational, which is supposed to start with a set of random initial vectors. They will eventually converge to random vectors in the null space. > > The problem is that if I run the EPSSOLVE with the same set of parameters (e.g. -eps_ncv and -eps_mpd), I always get the same eigenstates in the null space. This implies that the solver always start the same set of initial ?random? vectors. > > How does petsc generate the initial random vectors for krylovschur? Is there a way for me to generate different random initial vectors every time I run the diagonalisation (of the same matrix)? > > Thanks, and stay safe and healthy! > > Cheers, > > Yang Bo > CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents. > Towards a sustainable earth: Print only when necessary. Thank you. > From knepley at gmail.com Tue Mar 17 10:00:07 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Mar 2020 11:00:07 -0400 Subject: [petsc-users] node DG with DMPlex In-Reply-To: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> References: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> Message-ID: On Mon, Mar 16, 2020 at 5:20 PM Yann Jobic wrote: > Hi all, > > I would like to implement a nodal DG with the DMPlex interface. 
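[Illustrative sketch added to the archive, not part of the original messages: the three seeding steps from Jose Roman's reply above, wrapped in a hypothetical helper. The helper name and the 'seed' argument are my additions, as is the extra PetscRandomSeed() call that makes the new seed take effect; call the helper before EPSSolve().]

#include <slepceps.h>

/* Hypothetical helper: give the Krylov-Schur starting vectors a different
   random seed on each run, e.g. by passing in the time or the process id. */
static PetscErrorCode SetRandomInitialSeed(EPS eps, unsigned long seed)
{
  BV             bv;
  PetscRandom    rand;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = EPSGetBV(eps, &bv);CHKERRQ(ierr);                /* BV holding the basis vectors */
  ierr = BVGetRandomContext(bv, &rand);CHKERRQ(ierr);     /* its PetscRandom generator */
  ierr = PetscRandomSetSeed(rand, seed);CHKERRQ(ierr);
  ierr = PetscRandomSeed(rand);CHKERRQ(ierr);             /* make the new seed take effect */
  PetscFunctionReturn(0);
}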
> Therefore, i must add the internal nodes to the DM (GLL nodes), with the > constrains : > 1) Add them as solution points, with correct coordinates (and keep the > good rotational ordering) > 2) Find the shared nodes at faces in order to compute the fluxes > 3) For parallel use, so synchronize the ghost node at each time steps > Let me get the fundamentals straight before advising, since I have never implemented nodal DG. 1) What is shared? We have an implementation of spectral element ordering ( https://gitlab.com/petsc/petsc/-/blob/master/src/dm/impls/plex/examples/tutorials/ex6.c). Those share the whole element boundary. 2) What ghosts do you need? 3) You want to store real space coordinates for a quadrature? We usually define a quadrature on the reference element once. Thanks, Matt > I found elements of answers in those threads : > https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html > > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html > > However, it's not clear for me where to begin. > > Quoting Matt, i should : > " DMGetCoordinateDM(dm, &cdm); > > DMCreateLocalVector(cdm, &coordinatesLocal); > > DMSetCoordinatesLocal(dm, coordinatesLocal);" > > However, i will not create ghost nodes this way. And i'm not sure to > keep the good ordering. > This part should be implemented in the PetscFE interface, for high order > discrete solutions. > I did not succeed in finding the correct part of the source doing it. > > Could you please give me some hint to begin correctly thoses tasks ? > > Thanks, > > Yann > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Tue Mar 17 10:08:42 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Tue, 17 Mar 2020 09:08:42 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Hi Satish, Could you merge your branch, balay/fix-checkFortranLibraries/maint, into maint? I added glibc to my conda environment (conda install -c dan_blanchard glibc), and your branch ran well. If you are interested, I attached the successful log file here. Thanks, Fande On Sat, Mar 14, 2020 at 5:01 PM Fande Kong wrote: > Without touching the configuration file, the > option: --download-hypre-configure-arguments='LIBS="-lmpifort -lgfortran"', > also works. > > > Thanks, Satish, > > > Fande, > > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong wrote: > >> OK. I finally got PETSc complied. >> >> "-lgfortran" was required by fblaslapack >> "-lmpifort" was required by mumps. >> >> However, I had to manually add the same thing for hypre as well: >> >> git diff >> diff --git a/config/BuildSystem/config/packages/hypre.py >> b/config/BuildSystem/config/packages/hypre.py >> index 4d915c312f..f4300230a6 100644 >> --- a/config/BuildSystem/config/packages/hypre.py >> +++ b/config/BuildSystem/config/packages/hypre.py >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): >> args.append('--with-lapack-lib=" "') >> args.append('--with-blas=no') >> args.append('--with-lapack=no') >> + args.append('LIBS="-lmpifort -lgfortran"') >> if self.openmp.found: >> args.append('--with-openmp') >> self.usesopenmp = 'yes' >> >> >> Why hypre could not pick up LIBS options automatically? 
>> >> >> Thanks, >> >> Fande, >> >> >> >> >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >>> Configure Options: --configModules=PETSc.Configure >>> --optionsModule=config.compilerOptions --download-hypre=1 >>> --with-debugging=no --with-shared-libraries=1 --download-fblaslapack=1 >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1 >>> --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git >>> --download-slepc-commit= 59ff81b --with-mpi=1 --with-cxx-dialect=C++11 >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 >>> -ffunction-sections -pipe -isystem >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2 >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib >>> -L/home/kongf/workhome/rod/miniconda3/lib >>> AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran -lmpifort >>> >>> You are missing quotes with LIBS option - and likely the libraries in >>> the wrong order. >>> >>> Suggest using: >>> >>> LIBS="-lmpifort -lgfortran" >>> or >>> 'LIBS=-lmpifort -lgfortran' >>> >>> Assuming you are invoking configure from shell. >>> >>> Satish >>> >>> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: >>> >>> > to work around - you can try: >>> > >>> > LIBS="-lmpifort -lgfortran" >>> > >>> > Satish >>> > >>> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: >>> > >>> > > Its the same location as before. For some reason configure is not >>> saving the relevant logs. >>> > > >>> > > I don't understand saveLog() restoreLog() stuff. Matt, can you check >>> on this? >>> > > >>> > > Satish >>> > > >>> > > On Sat, 14 Mar 2020, Fande Kong wrote: >>> > > >>> > > > The configuration crashed earlier than before with your changes. >>> > > > >>> > > > Please see the attached log file when using your branch. The >>> trouble lines >>> > > > should be: >>> > > > >>> > > > " asub=self.mangleFortranFunction("asub") >>> > > > cbody = "extern void "+asub+"(void);\nint main(int argc,char >>> > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; >>> > > > " >>> > > > >>> > > > Thanks, >>> > > > >>> > > > Fande, >>> > > > >>> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay >>> wrote: >>> > > > >>> > > > > I can't figure out what the stack in the attached configure.log. >>> [likely >>> > > > > some stuff isn't getting logged in it] >>> > > > > >>> > > > > Can you retry with branch >>> 'balay/fix-checkFortranLibraries/maint'? >>> > > > > >>> > > > > Satish >>> > > > > >>> > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >>> > > > > >>> > > > > > Thanks, Satish, >>> > > > > > >>> > > > > > But still have the problem. Please see the attached log file. >>> > > > > > >>> > > > > > Thanks, >>> > > > > > >>> > > > > > Fande. >>> > > > > > >>> > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay < >>> balay at mcs.anl.gov> wrote: >>> > > > > > >>> > > > > > > Can you retry with the attached patch? 
>>> > > > > > > >>> > > > > > > BTW: Its best to use the latest patched version - i.e >>> > > > > petsc-3.12.4.tar.gz >>> > > > > > > >>> > > > > > > Satish >>> > > > > > > >>> > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >>> > > > > > > >>> > > > > > > > This fixed the fblaslapack issue. Now have another issue >>> about mumps. >>> > > > > > > > >>> > > > > > > > Please see the log file attached. >>> > > > > > > > >>> > > > > > > > Thanks, >>> > > > > > > > >>> > > > > > > > Fande, >>> > > > > > > > >>> > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < >>> balay at mcs.anl.gov> >>> > > > > wrote: >>> > > > > > > > >>> > > > > > > > > For some reason - the fortran compiler libraries check >>> worked fine >>> > > > > > > without >>> > > > > > > > > -lgfortran. >>> > > > > > > > > >>> > > > > > > > > But now - flbaslapack check is failing without it. >>> > > > > > > > > >>> > > > > > > > > To work arround - you can use option LIBS=-lgfortran >>> > > > > > > > > >>> > > > > > > > > Satish >>> > > > > > > > > >>> > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >>> > > > > > > > > >>> > > > > > > > > > Hi All, >>> > > > > > > > > > >>> > > > > > > > > > I had an issue when configuring petsc on a linux >>> machine. I have >>> > > > > the >>> > > > > > > > > > following error message: >>> > > > > > > > > > >>> > > > > > > > > > Compiling FBLASLAPACK; this may take several >>> minutes >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> =============================================================================== >>> > > > > > > > > > >>> > > > > > > > > > TESTING: checkLib from >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> ******************************************************************************* >>> > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see >>> > > > > > > configure.log for >>> > > > > > > > > > details): >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> ------------------------------------------------------------------------------- >>> > > > > > > > > > --download-fblaslapack libraries cannot be used >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> ******************************************************************************* >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > > The configuration log was attached. >>> > > > > > > > > > >>> > > > > > > > > > Thanks, >>> > > > > > > > > > >>> > > > > > > > > > Fande, >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > > >>> > > > >>> > > >>> > >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 3626620 bytes Desc: not available URL: From knepley at gmail.com Tue Mar 17 10:17:15 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Mar 2020 11:17:15 -0400 Subject: [petsc-users] About the initial guess for KSP method. 
In-Reply-To: References: Message-ID: On Mon, Mar 16, 2020 at 4:32 PM Xiaodong Liu wrote: > Hi, all, > > I am testing KSPSetInitialGuessNonzero using the case > > > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex50.c.html > > This case is special. WIth zero initial guess, 1 iteration can deliver the > exact solution. But when I tried true for KSPSetInitialGuessNonzero, it > shows the same convergence history as false. > > Does the code change KSPSetInitialGuessNonzero to false automatically? > For that flag to make any difference, you have to pass in a nonzero vector to KSPSolve(). This example does not do that. Were you doing that? Thanks, Matt > Take care. > > Thanks, > > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliu29 at ncsu.edu Tue Mar 17 10:19:08 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Tue, 17 Mar 2020 08:19:08 -0700 Subject: [petsc-users] About the initial guess for KSP method. In-Reply-To: References: Message-ID: Thanks. It is very useful. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Tue, Mar 17, 2020 at 8:17 AM Matthew Knepley wrote: > On Mon, Mar 16, 2020 at 4:32 PM Xiaodong Liu wrote: > >> Hi, all, >> >> I am testing KSPSetInitialGuessNonzero using the case >> >> >> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex50.c.html >> >> This case is special. WIth zero initial guess, 1 iteration can deliver >> the exact solution. But when I tried true for KSPSetInitialGuessNonzero, it >> shows the same convergence history as false. >> >> Does the code change KSPSetInitialGuessNonzero to false automatically? >> > > For that flag to make any difference, you have to pass in a nonzero vector > to KSPSolve(). This example does not do that. > Were you doing that? > > Thanks, > > Matt > > >> Take care. >> >> Thanks, >> >> Xiaodong Liu, PhD >> X: Computational Physics Division >> Los Alamos National Laboratory >> P.O. Box 1663, >> Los Alamos, NM 87544 >> 505-709-0534 >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 17 10:23:57 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 17 Mar 2020 10:23:57 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: So what was the initial problem? Did conda install gcc without glibc? Or was it using the wrong glibc? Because the compiler appeared partly functional [well the build worked with just LIBS="-lmpifort -lgfortran"] And after the correct glibc was installed - did current maint still fail to build? Can you send configure.log for this? And its not clear to me why balay/fix-checkFortranLibraries/maint broke before this fix. 
[for one configure.log was incomplete] Satish On Tue, 17 Mar 2020, Fande Kong wrote: > Hi Satish, > > Could you merge your branch, balay/fix-checkFortranLibraries/maint, into > maint? > > I added glibc to my conda environment (conda install -c dan_blanchard > glibc), and your branch ran well. > > If you are interested, I attached the successful log file here. > > Thanks, > > Fande > > On Sat, Mar 14, 2020 at 5:01 PM Fande Kong wrote: > > > Without touching the configuration file, the > > option: --download-hypre-configure-arguments='LIBS="-lmpifort -lgfortran"', > > also works. > > > > > > Thanks, Satish, > > > > > > Fande, > > > > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong wrote: > > > >> OK. I finally got PETSc complied. > >> > >> "-lgfortran" was required by fblaslapack > >> "-lmpifort" was required by mumps. > >> > >> However, I had to manually add the same thing for hypre as well: > >> > >> git diff > >> diff --git a/config/BuildSystem/config/packages/hypre.py > >> b/config/BuildSystem/config/packages/hypre.py > >> index 4d915c312f..f4300230a6 100644 > >> --- a/config/BuildSystem/config/packages/hypre.py > >> +++ b/config/BuildSystem/config/packages/hypre.py > >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): > >> args.append('--with-lapack-lib=" "') > >> args.append('--with-blas=no') > >> args.append('--with-lapack=no') > >> + args.append('LIBS="-lmpifort -lgfortran"') > >> if self.openmp.found: > >> args.append('--with-openmp') > >> self.usesopenmp = 'yes' > >> > >> > >> Why hypre could not pick up LIBS options automatically? > >> > >> > >> Thanks, > >> > >> Fande, > >> > >> > >> > >> > >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < > >> petsc-users at mcs.anl.gov> wrote: > >> > >>> Configure Options: --configModules=PETSc.Configure > >>> --optionsModule=config.compilerOptions --download-hypre=1 > >>> --with-debugging=no --with-shared-libraries=1 --download-fblaslapack=1 > >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1 > >>> --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 > >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git > >>> --download-slepc-commit= 59ff81b --with-mpi=1 --with-cxx-dialect=C++11 > >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona > >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 > >>> -ffunction-sections -pipe -isystem > >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2 > >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now > >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections > >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib > >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib > >>> -L/home/kongf/workhome/rod/miniconda3/lib > >>> AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar > >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran -lmpifort > >>> > >>> You are missing quotes with LIBS option - and likely the libraries in > >>> the wrong order. > >>> > >>> Suggest using: > >>> > >>> LIBS="-lmpifort -lgfortran" > >>> or > >>> 'LIBS=-lmpifort -lgfortran' > >>> > >>> Assuming you are invoking configure from shell. > >>> > >>> Satish > >>> > >>> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > >>> > >>> > to work around - you can try: > >>> > > >>> > LIBS="-lmpifort -lgfortran" > >>> > > >>> > Satish > >>> > > >>> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > >>> > > >>> > > Its the same location as before. 
For some reason configure is not > >>> saving the relevant logs. > >>> > > > >>> > > I don't understand saveLog() restoreLog() stuff. Matt, can you check > >>> on this? > >>> > > > >>> > > Satish > >>> > > > >>> > > On Sat, 14 Mar 2020, Fande Kong wrote: > >>> > > > >>> > > > The configuration crashed earlier than before with your changes. > >>> > > > > >>> > > > Please see the attached log file when using your branch. The > >>> trouble lines > >>> > > > should be: > >>> > > > > >>> > > > " asub=self.mangleFortranFunction("asub") > >>> > > > cbody = "extern void "+asub+"(void);\nint main(int argc,char > >>> > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > >>> > > > " > >>> > > > > >>> > > > Thanks, > >>> > > > > >>> > > > Fande, > >>> > > > > >>> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay > >>> wrote: > >>> > > > > >>> > > > > I can't figure out what the stack in the attached configure.log. > >>> [likely > >>> > > > > some stuff isn't getting logged in it] > >>> > > > > > >>> > > > > Can you retry with branch > >>> 'balay/fix-checkFortranLibraries/maint'? > >>> > > > > > >>> > > > > Satish > >>> > > > > > >>> > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > >>> > > > > > >>> > > > > > Thanks, Satish, > >>> > > > > > > >>> > > > > > But still have the problem. Please see the attached log file. > >>> > > > > > > >>> > > > > > Thanks, > >>> > > > > > > >>> > > > > > Fande. > >>> > > > > > > >>> > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay < > >>> balay at mcs.anl.gov> wrote: > >>> > > > > > > >>> > > > > > > Can you retry with the attached patch? > >>> > > > > > > > >>> > > > > > > BTW: Its best to use the latest patched version - i.e > >>> > > > > petsc-3.12.4.tar.gz > >>> > > > > > > > >>> > > > > > > Satish > >>> > > > > > > > >>> > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > >>> > > > > > > > >>> > > > > > > > This fixed the fblaslapack issue. Now have another issue > >>> about mumps. > >>> > > > > > > > > >>> > > > > > > > Please see the log file attached. > >>> > > > > > > > > >>> > > > > > > > Thanks, > >>> > > > > > > > > >>> > > > > > > > Fande, > >>> > > > > > > > > >>> > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < > >>> balay at mcs.anl.gov> > >>> > > > > wrote: > >>> > > > > > > > > >>> > > > > > > > > For some reason - the fortran compiler libraries check > >>> worked fine > >>> > > > > > > without > >>> > > > > > > > > -lgfortran. > >>> > > > > > > > > > >>> > > > > > > > > But now - flbaslapack check is failing without it. > >>> > > > > > > > > > >>> > > > > > > > > To work arround - you can use option LIBS=-lgfortran > >>> > > > > > > > > > >>> > > > > > > > > Satish > >>> > > > > > > > > > >>> > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > >>> > > > > > > > > > >>> > > > > > > > > > Hi All, > >>> > > > > > > > > > > >>> > > > > > > > > > I had an issue when configuring petsc on a linux > >>> machine. 
I have > >>> > > > > the > >>> > > > > > > > > > following error message: > >>> > > > > > > > > > > >>> > > > > > > > > > Compiling FBLASLAPACK; this may take several > >>> minutes > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > >>> =============================================================================== > >>> > > > > > > > > > > >>> > > > > > > > > > TESTING: checkLib from > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > >>> config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > >>> ******************************************************************************* > >>> > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > >>> > > > > > > configure.log for > >>> > > > > > > > > > details): > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > >>> ------------------------------------------------------------------------------- > >>> > > > > > > > > > --download-fblaslapack libraries cannot be used > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > >>> ******************************************************************************* > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > The configuration log was attached. > >>> > > > > > > > > > > >>> > > > > > > > > > Thanks, > >>> > > > > > > > > > > >>> > > > > > > > > > Fande, > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > > >>> > > > > >>> > > > >>> > > >>> > >> > From sajidsyed2021 at u.northwestern.edu Tue Mar 17 12:41:30 2020 From: sajidsyed2021 at u.northwestern.edu (Sajid Ali) Date: Tue, 17 Mar 2020 12:41:30 -0500 Subject: [petsc-users] GAMG parameters for ideal coarsening ratio In-Reply-To: References: Message-ID: Hi Mark/Jed, The problem I'm solving is scalar helmholtz in 2D, (u_t = A*u_xx + A*u_yy + F_t*u, with the familiar 5 point central difference as the derivative approximation, I'm also attaching the result of -info | grep GAMG if that helps). My goal is to get weak and strong scaling results for the FD solver (leading me to double check all my parameters). I ran the sweep again as Mark suggested and it looks like my base params were close to optimal ( negative threshold and 10 levels of squaring with gmres/jacobi smoothers (chebyshev/sor is slower)). [image: image.png] While I think that the base parameters should work well for strong scaling, do I have to modify any of my parameters for a weak scaling run ? Does GAMG automatically increase the number of mg-levels as grid size increases or is it upon the user to do that ? @Mark : Is there a GAMG implementation paper I should cite ? I've already added a citation for the Comput. Mech. (2007) 39: 497?507 as a reference for the general idea of applying agglomeration type multigrid preconditioning to helmholtz operators. Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 86644 bytes Desc: not available URL: From mfadams at lbl.gov Tue Mar 17 13:41:50 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 17 Mar 2020 14:41:50 -0400 Subject: [petsc-users] GAMG parameters for ideal coarsening ratio In-Reply-To: References: Message-ID: On Tue, Mar 17, 2020 at 1:42 PM Sajid Ali wrote: > Hi Mark/Jed, > > The problem I'm solving is scalar helmholtz in 2D, (u_t = A*u_xx + A*u_yy > + F_t*u, with the familiar 5 point central difference as the derivative > approximation, > I assume this is definite HelmHoltz. The time integrator will also add a mass term. I'm assuming F_t looks like a mass matrix. > I'm also attaching the result of -info | grep GAMG if that helps). My goal > is to get weak and strong scaling results for the FD solver (leading me to > double check all my parameters). I ran the sweep again as Mark suggested > and it looks like my base params were close to optimal ( negative threshold > and 10 levels of squaring > For low order discretizations, squaring every level, as you are doing, sound right. And the mass matrix confuses GAMG's filtering heuristics so no filter sounds reasonable. Note, hypre would do better than GAMG on this problem. > with gmres/jacobi smoothers (chebyshev/sor is slower)). > You don't want to use GMRES as a smoother (unless you have indefinite Helmholtz). SOR will be more expensive but often converges a lot faster. chebyshev/jacobi would probably be better for you. And you want CG (-ksp_type cg) if this system is symmetric positive definite. > > [image: image.png] > > While I think that the base parameters should work well for strong > scaling, do I have to modify any of my parameters for a weak scaling run ? > Does GAMG automatically increase the number of mg-levels as grid size > increases or is it upon the user to do that ? > > @Mark : Is there a GAMG implementation paper I should cite ? I've already > added a citation for the Comput. Mech. (2007) 39: 497?507 as a reference > for the general idea of applying agglomeration type multigrid > preconditioning to helmholtz operators. > > > Thank You, > Sajid Ali | PhD Candidate > Applied Physics > Northwestern University > s-sajid-ali.github.io > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 86644 bytes Desc: not available URL: From fdkong.jd at gmail.com Tue Mar 17 14:59:22 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Tue, 17 Mar 2020 13:59:22 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: On Tue, Mar 17, 2020 at 9:24 AM Satish Balay wrote: > So what was the initial problem? Did conda install gcc without glibc? Or > was it using the wrong glibc? > Looks like GCC installed by conda uses an old version of glibc (2.12). > Because the compiler appeared partly functional [well the build worked > with just LIBS="-lmpifort -lgfortran"] > > And after the correct glibc was installed - did current maint still fail > to build? > Still failed because PETSc claimed that: there were no needed fortran libraries when using mpicc as the linker. But in fact, we need these fortran stuffs when linking blaslapack and mumps. > > Can you send configure.log for this? > > And its not clear to me why balay/fix-checkFortranLibraries/maint broke > before this fix. 
[for one configure.log was incomplete] > I am not 100% sure, but I think the complied and linked executable can not run because of "glibc_2.14' not found". The version of glibc was too low. So current solution for me is that: your branch + a new version of glibc (2.18). Thanks, Fande, > > Satish > > On Tue, 17 Mar 2020, Fande Kong wrote: > > > Hi Satish, > > > > Could you merge your branch, balay/fix-checkFortranLibraries/maint, into > > maint? > > > > I added glibc to my conda environment (conda install -c dan_blanchard > > glibc), and your branch ran well. > > > > If you are interested, I attached the successful log file here. > > > > Thanks, > > > > Fande > > > > On Sat, Mar 14, 2020 at 5:01 PM Fande Kong wrote: > > > > > Without touching the configuration file, the > > > option: --download-hypre-configure-arguments='LIBS="-lmpifort > -lgfortran"', > > > also works. > > > > > > > > > Thanks, Satish, > > > > > > > > > Fande, > > > > > > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong > wrote: > > > > > >> OK. I finally got PETSc complied. > > >> > > >> "-lgfortran" was required by fblaslapack > > >> "-lmpifort" was required by mumps. > > >> > > >> However, I had to manually add the same thing for hypre as well: > > >> > > >> git diff > > >> diff --git a/config/BuildSystem/config/packages/hypre.py > > >> b/config/BuildSystem/config/packages/hypre.py > > >> index 4d915c312f..f4300230a6 100644 > > >> --- a/config/BuildSystem/config/packages/hypre.py > > >> +++ b/config/BuildSystem/config/packages/hypre.py > > >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): > > >> args.append('--with-lapack-lib=" "') > > >> args.append('--with-blas=no') > > >> args.append('--with-lapack=no') > > >> + args.append('LIBS="-lmpifort -lgfortran"') > > >> if self.openmp.found: > > >> args.append('--with-openmp') > > >> self.usesopenmp = 'yes' > > >> > > >> > > >> Why hypre could not pick up LIBS options automatically? > > >> > > >> > > >> Thanks, > > >> > > >> Fande, > > >> > > >> > > >> > > >> > > >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < > > >> petsc-users at mcs.anl.gov> wrote: > > >> > > >>> Configure Options: --configModules=PETSc.Configure > > >>> --optionsModule=config.compilerOptions --download-hypre=1 > > >>> --with-debugging=no --with-shared-libraries=1 > --download-fblaslapack=1 > > >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1 > > >>> --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 > > >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git > > >>> --download-slepc-commit= 59ff81b --with-mpi=1 > --with-cxx-dialect=C++11 > > >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona > > >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong > -fno-plt -O2 > > >>> -ffunction-sections -pipe -isystem > > >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2 > > >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now > > >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections > > >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib > > >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib > > >>> -L/home/kongf/workhome/rod/miniconda3/lib > > >>> > AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar > > >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran > -lmpifort > > >>> > > >>> You are missing quotes with LIBS option - and likely the libraries in > > >>> the wrong order. 
> > >>> > > >>> Suggest using: > > >>> > > >>> LIBS="-lmpifort -lgfortran" > > >>> or > > >>> 'LIBS=-lmpifort -lgfortran' > > >>> > > >>> Assuming you are invoking configure from shell. > > >>> > > >>> Satish > > >>> > > >>> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > >>> > > >>> > to work around - you can try: > > >>> > > > >>> > LIBS="-lmpifort -lgfortran" > > >>> > > > >>> > Satish > > >>> > > > >>> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > >>> > > > >>> > > Its the same location as before. For some reason configure is not > > >>> saving the relevant logs. > > >>> > > > > >>> > > I don't understand saveLog() restoreLog() stuff. Matt, can you > check > > >>> on this? > > >>> > > > > >>> > > Satish > > >>> > > > > >>> > > On Sat, 14 Mar 2020, Fande Kong wrote: > > >>> > > > > >>> > > > The configuration crashed earlier than before with your > changes. > > >>> > > > > > >>> > > > Please see the attached log file when using your branch. The > > >>> trouble lines > > >>> > > > should be: > > >>> > > > > > >>> > > > " asub=self.mangleFortranFunction("asub") > > >>> > > > cbody = "extern void "+asub+"(void);\nint main(int > argc,char > > >>> > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > > >>> > > > " > > >>> > > > > > >>> > > > Thanks, > > >>> > > > > > >>> > > > Fande, > > >>> > > > > > >>> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay < > balay at mcs.anl.gov> > > >>> wrote: > > >>> > > > > > >>> > > > > I can't figure out what the stack in the attached > configure.log. > > >>> [likely > > >>> > > > > some stuff isn't getting logged in it] > > >>> > > > > > > >>> > > > > Can you retry with branch > > >>> 'balay/fix-checkFortranLibraries/maint'? > > >>> > > > > > > >>> > > > > Satish > > >>> > > > > > > >>> > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > >>> > > > > > > >>> > > > > > Thanks, Satish, > > >>> > > > > > > > >>> > > > > > But still have the problem. Please see the attached log > file. > > >>> > > > > > > > >>> > > > > > Thanks, > > >>> > > > > > > > >>> > > > > > Fande. > > >>> > > > > > > > >>> > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay < > > >>> balay at mcs.anl.gov> wrote: > > >>> > > > > > > > >>> > > > > > > Can you retry with the attached patch? > > >>> > > > > > > > > >>> > > > > > > BTW: Its best to use the latest patched version - i.e > > >>> > > > > petsc-3.12.4.tar.gz > > >>> > > > > > > > > >>> > > > > > > Satish > > >>> > > > > > > > > >>> > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > >>> > > > > > > > > >>> > > > > > > > This fixed the fblaslapack issue. Now have another > issue > > >>> about mumps. > > >>> > > > > > > > > > >>> > > > > > > > Please see the log file attached. > > >>> > > > > > > > > > >>> > > > > > > > Thanks, > > >>> > > > > > > > > > >>> > > > > > > > Fande, > > >>> > > > > > > > > > >>> > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < > > >>> balay at mcs.anl.gov> > > >>> > > > > wrote: > > >>> > > > > > > > > > >>> > > > > > > > > For some reason - the fortran compiler libraries > check > > >>> worked fine > > >>> > > > > > > without > > >>> > > > > > > > > -lgfortran. > > >>> > > > > > > > > > > >>> > > > > > > > > But now - flbaslapack check is failing without it. 
> > >>> > > > > > > > > > > >>> > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > >>> > > > > > > > > > > >>> > > > > > > > > Satish > > >>> > > > > > > > > > > >>> > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > >>> > > > > > > > > > > >>> > > > > > > > > > Hi All, > > >>> > > > > > > > > > > > >>> > > > > > > > > > I had an issue when configuring petsc on a linux > > >>> machine. I have > > >>> > > > > the > > >>> > > > > > > > > > following error message: > > >>> > > > > > > > > > > > >>> > > > > > > > > > Compiling FBLASLAPACK; this may take several > > >>> minutes > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > =============================================================================== > > >>> > > > > > > > > > > > >>> > > > > > > > > > TESTING: checkLib from > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > ******************************************************************************* > > >>> > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS > (see > > >>> > > > > > > configure.log for > > >>> > > > > > > > > > details): > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > ------------------------------------------------------------------------------- > > >>> > > > > > > > > > --download-fblaslapack libraries cannot be used > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > > > >>> > ******************************************************************************* > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > The configuration log was attached. > > >>> > > > > > > > > > > > >>> > > > > > > > > > Thanks, > > >>> > > > > > > > > > > > >>> > > > > > > > > > Fande, > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > >>> > > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure_satish_branch_without_glibc.log Type: text/x-log Size: 158257 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure_maint_glibc.log Type: text/x-log Size: 1748015 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure_satish_branch_glibc.log Type: text/x-log Size: 3626620 bytes Desc: not available URL: From balay at mcs.anl.gov Tue Mar 17 15:17:10 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 17 Mar 2020 15:17:10 -0500 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Thanks for the update. Hopefully Matt can check on the issue with missing stuff in configure.log. The MR is at https://gitlab.com/petsc/petsc/-/merge_requests/2606 Satish On Tue, 17 Mar 2020, Fande Kong wrote: > On Tue, Mar 17, 2020 at 9:24 AM Satish Balay wrote: > > > So what was the initial problem? Did conda install gcc without glibc? Or > > was it using the wrong glibc? 
> > > > Looks like GCC installed by conda uses an old version of glibc (2.12). > > > > Because the compiler appeared partly functional [well the build worked > > with just LIBS="-lmpifort -lgfortran"] > > > > And after the correct glibc was installed - did current maint still fail > > to build? > > > > Still failed because PETSc claimed that: there were no needed fortran > libraries when using mpicc as the linker. But in fact, we need these > fortran stuffs when linking blaslapack and mumps. > > > > > > Can you send configure.log for this? > > > > And its not clear to me why balay/fix-checkFortranLibraries/maint broke > > before this fix. [for one configure.log was incomplete] > > > > I am not 100% sure, but I think the complied and linked executable can not > run because of "glibc_2.14' not found". The version of glibc was too low. > > > So current solution for me is that: your branch + a new version of glibc > (2.18). > > Thanks, > > Fande, > > > > > > > Satish > > > > On Tue, 17 Mar 2020, Fande Kong wrote: > > > > > Hi Satish, > > > > > > Could you merge your branch, balay/fix-checkFortranLibraries/maint, into > > > maint? > > > > > > I added glibc to my conda environment (conda install -c dan_blanchard > > > glibc), and your branch ran well. > > > > > > If you are interested, I attached the successful log file here. > > > > > > Thanks, > > > > > > Fande > > > > > > On Sat, Mar 14, 2020 at 5:01 PM Fande Kong wrote: > > > > > > > Without touching the configuration file, the > > > > option: --download-hypre-configure-arguments='LIBS="-lmpifort > > -lgfortran"', > > > > also works. > > > > > > > > > > > > Thanks, Satish, > > > > > > > > > > > > Fande, > > > > > > > > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong > > wrote: > > > > > > > >> OK. I finally got PETSc complied. > > > >> > > > >> "-lgfortran" was required by fblaslapack > > > >> "-lmpifort" was required by mumps. > > > >> > > > >> However, I had to manually add the same thing for hypre as well: > > > >> > > > >> git diff > > > >> diff --git a/config/BuildSystem/config/packages/hypre.py > > > >> b/config/BuildSystem/config/packages/hypre.py > > > >> index 4d915c312f..f4300230a6 100644 > > > >> --- a/config/BuildSystem/config/packages/hypre.py > > > >> +++ b/config/BuildSystem/config/packages/hypre.py > > > >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): > > > >> args.append('--with-lapack-lib=" "') > > > >> args.append('--with-blas=no') > > > >> args.append('--with-lapack=no') > > > >> + args.append('LIBS="-lmpifort -lgfortran"') > > > >> if self.openmp.found: > > > >> args.append('--with-openmp') > > > >> self.usesopenmp = 'yes' > > > >> > > > >> > > > >> Why hypre could not pick up LIBS options automatically? 
> > > >> > > > >> > > > >> Thanks, > > > >> > > > >> Fande, > > > >> > > > >> > > > >> > > > >> > > > >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < > > > >> petsc-users at mcs.anl.gov> wrote: > > > >> > > > >>> Configure Options: --configModules=PETSc.Configure > > > >>> --optionsModule=config.compilerOptions --download-hypre=1 > > > >>> --with-debugging=no --with-shared-libraries=1 > > --download-fblaslapack=1 > > > >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1 > > > >>> --download-superlu_dist=1 --download-mumps=1 --download-scalapack=1 > > > >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git > > > >>> --download-slepc-commit= 59ff81b --with-mpi=1 > > --with-cxx-dialect=C++11 > > > >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona > > > >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong > > -fno-plt -O2 > > > >>> -ffunction-sections -pipe -isystem > > > >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= LDFLAGS=-Wl,-O2 > > > >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now > > > >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections > > > >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib > > > >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib > > > >>> -L/home/kongf/workhome/rod/miniconda3/lib > > > >>> > > AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar > > > >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran > > -lmpifort > > > >>> > > > >>> You are missing quotes with LIBS option - and likely the libraries in > > > >>> the wrong order. > > > >>> > > > >>> Suggest using: > > > >>> > > > >>> LIBS="-lmpifort -lgfortran" > > > >>> or > > > >>> 'LIBS=-lmpifort -lgfortran' > > > >>> > > > >>> Assuming you are invoking configure from shell. > > > >>> > > > >>> Satish > > > >>> > > > >>> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > >>> > > > >>> > to work around - you can try: > > > >>> > > > > >>> > LIBS="-lmpifort -lgfortran" > > > >>> > > > > >>> > Satish > > > >>> > > > > >>> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > >>> > > > > >>> > > Its the same location as before. For some reason configure is not > > > >>> saving the relevant logs. > > > >>> > > > > > >>> > > I don't understand saveLog() restoreLog() stuff. Matt, can you > > check > > > >>> on this? > > > >>> > > > > > >>> > > Satish > > > >>> > > > > > >>> > > On Sat, 14 Mar 2020, Fande Kong wrote: > > > >>> > > > > > >>> > > > The configuration crashed earlier than before with your > > changes. > > > >>> > > > > > > >>> > > > Please see the attached log file when using your branch. The > > > >>> trouble lines > > > >>> > > > should be: > > > >>> > > > > > > >>> > > > " asub=self.mangleFortranFunction("asub") > > > >>> > > > cbody = "extern void "+asub+"(void);\nint main(int > > argc,char > > > >>> > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > > > >>> > > > " > > > >>> > > > > > > >>> > > > Thanks, > > > >>> > > > > > > >>> > > > Fande, > > > >>> > > > > > > >>> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay < > > balay at mcs.anl.gov> > > > >>> wrote: > > > >>> > > > > > > >>> > > > > I can't figure out what the stack in the attached > > configure.log. > > > >>> [likely > > > >>> > > > > some stuff isn't getting logged in it] > > > >>> > > > > > > > >>> > > > > Can you retry with branch > > > >>> 'balay/fix-checkFortranLibraries/maint'? 
> > > >>> > > > > > > > >>> > > > > Satish > > > >>> > > > > > > > >>> > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > >>> > > > > > > > >>> > > > > > Thanks, Satish, > > > >>> > > > > > > > > >>> > > > > > But still have the problem. Please see the attached log > > file. > > > >>> > > > > > > > > >>> > > > > > Thanks, > > > >>> > > > > > > > > >>> > > > > > Fande. > > > >>> > > > > > > > > >>> > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay < > > > >>> balay at mcs.anl.gov> wrote: > > > >>> > > > > > > > > >>> > > > > > > Can you retry with the attached patch? > > > >>> > > > > > > > > > >>> > > > > > > BTW: Its best to use the latest patched version - i.e > > > >>> > > > > petsc-3.12.4.tar.gz > > > >>> > > > > > > > > > >>> > > > > > > Satish > > > >>> > > > > > > > > > >>> > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > >>> > > > > > > > > > >>> > > > > > > > This fixed the fblaslapack issue. Now have another > > issue > > > >>> about mumps. > > > >>> > > > > > > > > > > >>> > > > > > > > Please see the log file attached. > > > >>> > > > > > > > > > > >>> > > > > > > > Thanks, > > > >>> > > > > > > > > > > >>> > > > > > > > Fande, > > > >>> > > > > > > > > > > >>> > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < > > > >>> balay at mcs.anl.gov> > > > >>> > > > > wrote: > > > >>> > > > > > > > > > > >>> > > > > > > > > For some reason - the fortran compiler libraries > > check > > > >>> worked fine > > > >>> > > > > > > without > > > >>> > > > > > > > > -lgfortran. > > > >>> > > > > > > > > > > > >>> > > > > > > > > But now - flbaslapack check is failing without it. > > > >>> > > > > > > > > > > > >>> > > > > > > > > To work arround - you can use option LIBS=-lgfortran > > > >>> > > > > > > > > > > > >>> > > > > > > > > Satish > > > >>> > > > > > > > > > > > >>> > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > >>> > > > > > > > > > > > >>> > > > > > > > > > Hi All, > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > I had an issue when configuring petsc on a linux > > > >>> machine. 
I have > > > >>> > > > > the > > > >>> > > > > > > > > > following error message: > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > Compiling FBLASLAPACK; this may take several > > > >>> minutes > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > =============================================================================== > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > TESTING: checkLib from > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > ******************************************************************************* > > > >>> > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS > > (see > > > >>> > > > > > > configure.log for > > > >>> > > > > > > > > > details): > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > ------------------------------------------------------------------------------- > > > >>> > > > > > > > > > --download-fblaslapack libraries cannot be used > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > ******************************************************************************* > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > The configuration log was attached. > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > Thanks, > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > Fande, > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > >> > > > > > > > > From hongzhang at anl.gov Tue Mar 17 16:09:36 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 17 Mar 2020 21:09:36 +0000 Subject: [petsc-users] TS generating inconsistent data In-Reply-To: References: Message-ID: <6E8D41A5-5956-4E4F-B891-2D6737D9DB49@anl.gov> Zane, Stefano?s suggestion should have fixed your code. I just want to let you know that it is not a requirement to call TSResetTrajectory() and there are other ways to fix your problem. Based on my experience, you might have called TSSaveTrajectory() in an unnecessary place, for example, TSSaveTrajectory() TSSolve() /* generate a reference solution or for some other purpose */ { /* optimization loop */ TSSolve() /* forward run for sensitivity calculation */ TSAdjointSolve() /* backward run for sensitivity calculation */ } Here, the first call to TSSolve() actually does not need the trajectory data. You can easily fix the ?inconsistent data? error by doing TSSolve() /* generate a reference solution or for some other purpose */ TSSaveTrajectory() { /* optimization loop */ TSSolve() /* forward run for sensitivity calculation */ TSAdjointSolve() /* backward run for sensitivity calculation */ } Alternatively, you can use the command line option -ts_trajectory_use_history 0 (you need to call TSSetFromOptions() after TSSaveTrajectory() to receive the TSTrajectory options). Hong (Mr.) 
On Mar 14, 2020, at 10:54 AM, Zane Charles Jakobs > wrote: Hi PETSc devs, I have some code that implements (essentially) 4D-VAR with PETSc, and the results of both my forward and adjoint integrations look correct to me (i.e. calling TSSolve() and then TSAdjointSolve() works correctly as far as I can tell). However, when I try to use a TaoSolve() to optimize my initial condition, I get this error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: History id should be unique [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12 GIT Date: 2020-02-21 16:53:25 -0600 [0]PETSC ERROR: ./var_ic_test on a arch-linux2-c-debug named DiffeoInvariant by diffeoinvariant Sat Mar 14 09:39:05 2020 [0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native -fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 --with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native -fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich [0]PETSC ERROR: #1 TSHistoryUpdate() line 82 in /usr/local/petsc/src/ts/interface/tshistory.c [0]PETSC ERROR: #2 TSTrajectorySet() line 73 in /usr/local/petsc/src/ts/trajectory/interface/traj.c [0]PETSC ERROR: #3 TSSolve() line 4005 in /usr/local/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #4 MixedModelFormVARICFunctionGradient() line 301 in mixed.c [0]PETSC ERROR: #5 TaoComputeObjectiveAndGradient() line 261 in /usr/local/petsc/src/tao/interface/taosolver_fg.c [0]PETSC ERROR: #6 TaoSolve_LMVM() line 23 in /usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c [0]PETSC ERROR: #7 TaoSolve() line 219 in /usr/local/petsc/src/tao/interface/taosolver.c [0]PETSC ERROR: #8 MixedModelOptimize() line 639 in mixed.c [0]PETSC ERROR: #9 MixedModelOptimizeInitialCondition() line 648 in mixed.c [0]PETSC ERROR: #10 main() line 76 in var_ic_test.c [0]PETSC ERROR: No PETSc Option Table entries [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- In the function MixedModelFormVARICFunctionGradient(), I do ierr = TSSetTime(model->ts, 0.0);CHKERRQ(ierr); ierr = TSSetStepNumber(model->ts, 0);CHKERRQ(ierr); ierr = TSSetFromOptions(model->ts);CHKERRQ(ierr); ierr = TSSetMaxTime(model->ts, model->obs->t);CHKERRQ(ierr); ierr = TSSolve(model->ts, model->X);CHKERRQ(ierr); ... [allocating and setting cost gradient vec] ierr = TSSetCostGradients(model->ts, 1, model->lambda, NULL);CHKERRQ(ierr); ierr = TSAdjointSolve(model->ts);CHKERRQ(ierr); ierr = VecCopy(model->lambda[0], G);CHKERRQ(ierr); What might be causing the above error? Am I using a deprecated version of the Tao interface? (I'm using TaoSetObjectiveAndGradientRoutine, as done in ex20_opt_ic.c) Thanks! -Zane Jakobs -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Wed Mar 18 11:58:29 2020 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Wed, 18 Mar 2020 17:58:29 +0100 Subject: [petsc-users] node DG with DMPlex In-Reply-To: References: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> Message-ID: <0526eb34-b4ce-19c4-4f76-81d2cd41cd45@univ-amu.fr> Hi matt, Le 3/17/2020 ? 
4:00 PM, Matthew Knepley a ?crit?: > On Mon, Mar 16, 2020 at 5:20 PM Yann Jobic > wrote: > > Hi all, > > I would like to implement a nodal DG with the DMPlex interface. > Therefore, i must add the internal nodes to the DM (GLL nodes), with > the > constrains : > 1) Add them as solution points, with correct coordinates (and keep the > good rotational ordering) > 2) Find the shared nodes at faces in order to compute the fluxes > 3) For parallel use, so synchronize the ghost node at each time steps > > > Let me get the fundamentals straight before advising, since I have never > implemented nodal DG. > > ? 1) What is shared? I need to duplicate an edge in 2D, or a facet in 3D, and to sync it after a time step, in order to compute the numerical fluxes (Lax-Friedrichs at the beginning). > > ? ? ? We have an implementation of spectral element ordering > (https://gitlab.com/petsc/petsc/-/blob/master/src/dm/impls/plex/examples/tutorials/ex6.c). > Those share > ? ? ? the whole element boundary. > > ? 2) What ghosts do you need? In order to compute the numerical fluxes of one element, i need the values of the surrounding nodes connected to the adjacent elements. > > ? 3) You want to store real space coordinates for a quadrature? It should be basically the same as PetscFE of higher order. I add some vertex needed to compute a polynomal solution of the desired order. That means that if i have a N, order of the local approximation, i need 0.5*(N+1)*(N+2) vertex to store in the DMPlex (in 2D), in order to : 1) have the correct number of dof 2) use ghost nodes to sync the values of the vertex/edge/facet for 1D/2D/3D problem 2) save correctly the solution Does it make sense to you ? Maybe like https://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ex11.c.html With the use of the function SplitFaces, which i didn't fully understood so far. Thanks, Yann > > ? ? ? We usually define a quadrature on the reference element once. > > ? Thanks, > > ? ? Matt > > I found elements of answers in those threads : > https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html > > However, it's not clear for me where to begin. > > Quoting Matt, i should : > "? DMGetCoordinateDM(dm, &cdm); > ? ? > ? DMCreateLocalVector(cdm, &coordinatesLocal); > ? > ? DMSetCoordinatesLocal(dm, coordinatesLocal);" > > However, i will not create ghost nodes this way. And i'm not sure to > keep the good ordering. > This part should be implemented in the PetscFE interface, for high > order > discrete solutions. > I did not succeed in finding the correct part of the source doing it. > > Could you please give me some hint to begin correctly thoses tasks ? > > Thanks, > > Yann > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From xliu29 at ncsu.edu Wed Mar 18 12:18:27 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Wed, 18 Mar 2020 10:18:27 -0700 Subject: [petsc-users] Does Petsc provide the non-KSP iterative solver, e.g., SOR, for a linear system. Message-ID: Hi, Petsc team, I am doing something using multigrid as a preconditioner for GMRES. From the official case, most use KSP (Chebyshev) and PC (SOR) for solving the preconditioning system for every level of multigrid except the coarse one. 
-ksp_type gmres -pc_type mg -pc_mg_levels 3 -mg_levels_ksp_type Chebyshev -mg_levels_pc_type SOR I would like to use SOR for solving the preconditioning system directly. I can not find that Petsc supports non-KSP iterative solver for a linear system, right? If I am wrong, please let me know. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Wed Mar 18 12:33:52 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Wed, 18 Mar 2020 18:33:52 +0100 Subject: [petsc-users] Does Petsc provide the non-KSP iterative solver, e.g., SOR, for a linear system. In-Reply-To: References: Message-ID: I may not be interpreting the question correctly, but you can use SOR as the only preconditioner, e.g. -ksp_type gmres -pc_type sor or you can use Richardson/SOR as your multigrid smoother, which might be what you're asking for -ksp_type gmres -pc_type mg -pc_mg_levels 3 -mg_levels_ksp_type richardson -mg_levels_pc_type SOR Am Mi., 18. M?rz 2020 um 18:19 Uhr schrieb Xiaodong Liu : > Hi, Petsc team, > > I am doing something using multigrid as a preconditioner for GMRES. From > the official case, > > most use KSP (Chebyshev) and PC (SOR) for solving the preconditioning > system for every level of multigrid except the coarse one. > > -ksp_type gmres -pc_type mg -pc_mg_levels 3 > -mg_levels_ksp_type Chebyshev -mg_levels_pc_type SOR > > I would like to use SOR for solving the preconditioning system directly. > I can not find that Petsc supports non-KSP iterative solver for a linear > system, right? > > If I am wrong, please let me know. > > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliu29 at ncsu.edu Wed Mar 18 12:40:14 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Wed, 18 Mar 2020 10:40:14 -0700 Subject: [petsc-users] Does Petsc provide the non-KSP iterative solver, e.g., SOR, for a linear system. In-Reply-To: References: Message-ID: Thanks, Patrick. I will try your suggestion Richardson/SOR. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Wed, Mar 18, 2020 at 10:34 AM Patrick Sanan wrote: > I may not be interpreting the question correctly, but you can use SOR as > the only preconditioner, e.g. > > -ksp_type gmres -pc_type sor > > or you can use Richardson/SOR as your multigrid smoother, which might be > what you're asking for > > -ksp_type gmres -pc_type mg -pc_mg_levels 3 > -mg_levels_ksp_type richardson -mg_levels_pc_type SOR > > > > Am Mi., 18. M?rz 2020 um 18:19 Uhr schrieb Xiaodong Liu : > >> Hi, Petsc team, >> >> I am doing something using multigrid as a preconditioner for GMRES. From >> the official case, >> >> most use KSP (Chebyshev) and PC (SOR) for solving the preconditioning >> system for every level of multigrid except the coarse one. >> >> -ksp_type gmres -pc_type mg -pc_mg_levels 3 >> -mg_levels_ksp_type Chebyshev -mg_levels_pc_type SOR >> >> I would like to use SOR for solving the preconditioning system >> directly. I can not find that Petsc supports non-KSP iterative solver for a >> linear system, right? >> >> If I am wrong, please let me know. 
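A hedged C sketch of the first variant Patrick suggests (SOR as the lone preconditioner for GMRES), assuming A, b, and x are an already assembled Mat and Vecs; the Richardson/SOR smoother variant is normally selected at run time through the -mg_levels_* options above via KSPSetFromOptions().

  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCSOR);CHKERRQ(ierr);      /* SOR applied as the only preconditioner */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);    /* lets -pc_type mg -mg_levels_* override at run time */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

If the goal is plain SOR sweeps as the solver itself, -ksp_type richardson together with -pc_type sor gives the classical SOR iteration.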
>> >> Xiaodong Liu, PhD >> X: Computational Physics Division >> Los Alamos National Laboratory >> P.O. Box 1663, >> Los Alamos, NM 87544 >> 505-709-0534 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Wed Mar 18 16:35:57 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 18 Mar 2020 15:35:57 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: Thanks, Satish, I keep investigating into this issue. Now, I have more insights. The fundamental reason is that: Conda-compilers (installed by: conda install -c conda-forge compilers) have a bunch of system libs in */sysroot/lib and */sysroot/usr/lib. Most of them are related to glibc. These libs may or may not be compatible to the OS system you are using. PETSc will find these libs, and think they are just regular user libs, and then hard code with "-rpath". If I make some changes to ignore sysroot/lib*, and then everything runs smoothly ( do not need to install glibc because OS will have a right one). git diff diff --git a/config/BuildSystem/config/compilers.py b/config/BuildSystem/config/compilers.py index 5367383141..6b26594b4e 100644 --- a/config/BuildSystem/config/compilers.py +++ b/config/BuildSystem/config/compilers.py @@ -1132,6 +1132,7 @@ Otherwise you need a different combination of C, C++, and Fortran compilers") if m: arg = '-L'+os.path.abspath(arg[2:]) if arg in ['-L/usr/lib','-L/lib','-L/usr/lib64','-L/lib64']: continue + if 'sysroot/usr/lib' in arg or 'sysroot/lib' in arg: continue if not arg in lflags: lflags.append(arg) self.logPrint('Found library directory: '+arg, 4, 'compilers') PETSc should treat these libs as system-level libs??? Have a branch here: Fande-Kong/skip_sysroot_libs_maint Any suggestion are appreciated, Thanks, Fande, On Tue, Mar 17, 2020 at 2:17 PM Satish Balay wrote: > Thanks for the update. > > Hopefully Matt can check on the issue with missing stuff in configure.log. > > The MR is at https://gitlab.com/petsc/petsc/-/merge_requests/2606 > > Satish > > > On Tue, 17 Mar 2020, Fande Kong wrote: > > > On Tue, Mar 17, 2020 at 9:24 AM Satish Balay wrote: > > > > > So what was the initial problem? Did conda install gcc without glibc? > Or > > > was it using the wrong glibc? > > > > > > > Looks like GCC installed by conda uses an old version of glibc (2.12). > > > > > > > Because the compiler appeared partly functional [well the build worked > > > with just LIBS="-lmpifort -lgfortran"] > > > > > > And after the correct glibc was installed - did current maint still > fail > > > to build? > > > > > > > Still failed because PETSc claimed that: there were no needed fortran > > libraries when using mpicc as the linker. But in fact, we need these > > fortran stuffs when linking blaslapack and mumps. > > > > > > > > > > Can you send configure.log for this? > > > > > > And its not clear to me why balay/fix-checkFortranLibraries/maint broke > > > before this fix. [for one configure.log was incomplete] > > > > > > > I am not 100% sure, but I think the complied and linked executable can > not > > run because of "glibc_2.14' not found". The version of glibc was too > low. > > > > > > So current solution for me is that: your branch + a new version of glibc > > (2.18). 
> > > > Thanks, > > > > Fande, > > > > > > > > > > > > Satish > > > > > > On Tue, 17 Mar 2020, Fande Kong wrote: > > > > > > > Hi Satish, > > > > > > > > Could you merge your branch, balay/fix-checkFortranLibraries/maint, > into > > > > maint? > > > > > > > > I added glibc to my conda environment (conda install -c dan_blanchard > > > > glibc), and your branch ran well. > > > > > > > > If you are interested, I attached the successful log file here. > > > > > > > > Thanks, > > > > > > > > Fande > > > > > > > > On Sat, Mar 14, 2020 at 5:01 PM Fande Kong > wrote: > > > > > > > > > Without touching the configuration file, the > > > > > option: --download-hypre-configure-arguments='LIBS="-lmpifort > > > -lgfortran"', > > > > > also works. > > > > > > > > > > > > > > > Thanks, Satish, > > > > > > > > > > > > > > > Fande, > > > > > > > > > > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong > > > wrote: > > > > > > > > > >> OK. I finally got PETSc complied. > > > > >> > > > > >> "-lgfortran" was required by fblaslapack > > > > >> "-lmpifort" was required by mumps. > > > > >> > > > > >> However, I had to manually add the same thing for hypre as well: > > > > >> > > > > >> git diff > > > > >> diff --git a/config/BuildSystem/config/packages/hypre.py > > > > >> b/config/BuildSystem/config/packages/hypre.py > > > > >> index 4d915c312f..f4300230a6 100644 > > > > >> --- a/config/BuildSystem/config/packages/hypre.py > > > > >> +++ b/config/BuildSystem/config/packages/hypre.py > > > > >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): > > > > >> args.append('--with-lapack-lib=" "') > > > > >> args.append('--with-blas=no') > > > > >> args.append('--with-lapack=no') > > > > >> + args.append('LIBS="-lmpifort -lgfortran"') > > > > >> if self.openmp.found: > > > > >> args.append('--with-openmp') > > > > >> self.usesopenmp = 'yes' > > > > >> > > > > >> > > > > >> Why hypre could not pick up LIBS options automatically? 
> > > > >> > > > > >> > > > > >> Thanks, > > > > >> > > > > >> Fande, > > > > >> > > > > >> > > > > >> > > > > >> > > > > >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < > > > > >> petsc-users at mcs.anl.gov> wrote: > > > > >> > > > > >>> Configure Options: --configModules=PETSc.Configure > > > > >>> --optionsModule=config.compilerOptions --download-hypre=1 > > > > >>> --with-debugging=no --with-shared-libraries=1 > > > --download-fblaslapack=1 > > > > >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1 > > > > >>> --download-superlu_dist=1 --download-mumps=1 > --download-scalapack=1 > > > > >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git > > > > >>> --download-slepc-commit= 59ff81b --with-mpi=1 > > > --with-cxx-dialect=C++11 > > > > >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona > > > > >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong > > > -fno-plt -O2 > > > > >>> -ffunction-sections -pipe -isystem > > > > >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= > LDFLAGS=-Wl,-O2 > > > > >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now > > > > >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections > > > > >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib > > > > >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib > > > > >>> -L/home/kongf/workhome/rod/miniconda3/lib > > > > >>> > > > > AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar > > > > >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran > > > -lmpifort > > > > >>> > > > > >>> You are missing quotes with LIBS option - and likely the > libraries in > > > > >>> the wrong order. > > > > >>> > > > > >>> Suggest using: > > > > >>> > > > > >>> LIBS="-lmpifort -lgfortran" > > > > >>> or > > > > >>> 'LIBS=-lmpifort -lgfortran' > > > > >>> > > > > >>> Assuming you are invoking configure from shell. > > > > >>> > > > > >>> Satish > > > > >>> > > > > >>> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > > >>> > > > > >>> > to work around - you can try: > > > > >>> > > > > > >>> > LIBS="-lmpifort -lgfortran" > > > > >>> > > > > > >>> > Satish > > > > >>> > > > > > >>> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: > > > > >>> > > > > > >>> > > Its the same location as before. For some reason configure > is not > > > > >>> saving the relevant logs. > > > > >>> > > > > > > >>> > > I don't understand saveLog() restoreLog() stuff. Matt, can > you > > > check > > > > >>> on this? > > > > >>> > > > > > > >>> > > Satish > > > > >>> > > > > > > >>> > > On Sat, 14 Mar 2020, Fande Kong wrote: > > > > >>> > > > > > > >>> > > > The configuration crashed earlier than before with your > > > changes. > > > > >>> > > > > > > > >>> > > > Please see the attached log file when using your branch. > The > > > > >>> trouble lines > > > > >>> > > > should be: > > > > >>> > > > > > > > >>> > > > " asub=self.mangleFortranFunction("asub") > > > > >>> > > > cbody = "extern void "+asub+"(void);\nint main(int > > > argc,char > > > > >>> > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; > > > > >>> > > > " > > > > >>> > > > > > > > >>> > > > Thanks, > > > > >>> > > > > > > > >>> > > > Fande, > > > > >>> > > > > > > > >>> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay < > > > balay at mcs.anl.gov> > > > > >>> wrote: > > > > >>> > > > > > > > >>> > > > > I can't figure out what the stack in the attached > > > configure.log. 
> > > > >>> [likely > > > > >>> > > > > some stuff isn't getting logged in it] > > > > >>> > > > > > > > > >>> > > > > Can you retry with branch > > > > >>> 'balay/fix-checkFortranLibraries/maint'? > > > > >>> > > > > > > > > >>> > > > > Satish > > > > >>> > > > > > > > > >>> > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > >>> > > > > > > > > >>> > > > > > Thanks, Satish, > > > > >>> > > > > > > > > > >>> > > > > > But still have the problem. Please see the attached log > > > file. > > > > >>> > > > > > > > > > >>> > > > > > Thanks, > > > > >>> > > > > > > > > > >>> > > > > > Fande. > > > > >>> > > > > > > > > > >>> > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay < > > > > >>> balay at mcs.anl.gov> wrote: > > > > >>> > > > > > > > > > >>> > > > > > > Can you retry with the attached patch? > > > > >>> > > > > > > > > > > >>> > > > > > > BTW: Its best to use the latest patched version - i.e > > > > >>> > > > > petsc-3.12.4.tar.gz > > > > >>> > > > > > > > > > > >>> > > > > > > Satish > > > > >>> > > > > > > > > > > >>> > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > >>> > > > > > > > > > > >>> > > > > > > > This fixed the fblaslapack issue. Now have another > > > issue > > > > >>> about mumps. > > > > >>> > > > > > > > > > > > >>> > > > > > > > Please see the log file attached. > > > > >>> > > > > > > > > > > > >>> > > > > > > > Thanks, > > > > >>> > > > > > > > > > > > >>> > > > > > > > Fande, > > > > >>> > > > > > > > > > > > >>> > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < > > > > >>> balay at mcs.anl.gov> > > > > >>> > > > > wrote: > > > > >>> > > > > > > > > > > > >>> > > > > > > > > For some reason - the fortran compiler libraries > > > check > > > > >>> worked fine > > > > >>> > > > > > > without > > > > >>> > > > > > > > > -lgfortran. > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > But now - flbaslapack check is failing without > it. > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > To work arround - you can use option > LIBS=-lgfortran > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > Satish > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > Hi All, > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > I had an issue when configuring petsc on a > linux > > > > >>> machine. 
I have > > > > >>> > > > > the > > > > >>> > > > > > > > > > following error message: > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > Compiling FBLASLAPACK; this may take > several > > > > >>> minutes > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > =============================================================================== > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > TESTING: checkLib from > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120) > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > ******************************************************************************* > > > > >>> > > > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS > > > (see > > > > >>> > > > > > > configure.log for > > > > >>> > > > > > > > > > details): > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > ------------------------------------------------------------------------------- > > > > >>> > > > > > > > > > --download-fblaslapack libraries cannot be used > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > >>> > > > > ******************************************************************************* > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > The configuration log was attached. > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > Thanks, > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > Fande, > > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >> > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Wed Mar 18 16:46:52 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 18 Mar 2020 15:46:52 -0600 Subject: [petsc-users] --download-fblaslapack libraries cannot be used In-Reply-To: References: Message-ID: A PR here https://gitlab.com/petsc/petsc/-/merge_requests/2612 On Wed, Mar 18, 2020 at 3:35 PM Fande Kong wrote: > Thanks, Satish, > > I keep investigating into this issue. Now, I have more insights. The > fundamental reason is that: Conda-compilers (installed by: conda install -c > conda-forge compilers) have a bunch of system libs in */sysroot/lib and > */sysroot/usr/lib. Most of them are related to glibc. These libs may or may > not be compatible to the OS system you are using. > > PETSc will find these libs, and think they are just regular user libs, and > then hard code with "-rpath". > > If I make some changes to ignore sysroot/lib*, and then everything > runs smoothly ( do not need to install glibc because OS will have a right > one). 
> > git diff > diff --git a/config/BuildSystem/config/compilers.py > b/config/BuildSystem/config/compilers.py > index 5367383141..6b26594b4e 100644 > --- a/config/BuildSystem/config/compilers.py > +++ b/config/BuildSystem/config/compilers.py > @@ -1132,6 +1132,7 @@ Otherwise you need a different combination of C, > C++, and Fortran compilers") > if m: > arg = '-L'+os.path.abspath(arg[2:]) > if arg in ['-L/usr/lib','-L/lib','-L/usr/lib64','-L/lib64']: > continue > + if 'sysroot/usr/lib' in arg or 'sysroot/lib' in arg: continue > if not arg in lflags: > lflags.append(arg) > self.logPrint('Found library directory: '+arg, 4, 'compilers') > > > PETSc should treat these libs as system-level libs??? > > Have a branch here: Fande-Kong/skip_sysroot_libs_maint > > Any suggestion are appreciated, > > Thanks, > > Fande, > > > On Tue, Mar 17, 2020 at 2:17 PM Satish Balay wrote: > >> Thanks for the update. >> >> Hopefully Matt can check on the issue with missing stuff in configure.log. >> >> The MR is at https://gitlab.com/petsc/petsc/-/merge_requests/2606 >> >> Satish >> >> >> On Tue, 17 Mar 2020, Fande Kong wrote: >> >> > On Tue, Mar 17, 2020 at 9:24 AM Satish Balay wrote: >> > >> > > So what was the initial problem? Did conda install gcc without glibc? >> Or >> > > was it using the wrong glibc? >> > > >> > >> > Looks like GCC installed by conda uses an old version of glibc (2.12). >> > >> > >> > > Because the compiler appeared partly functional [well the build worked >> > > with just LIBS="-lmpifort -lgfortran"] >> > > >> > > And after the correct glibc was installed - did current maint still >> fail >> > > to build? >> > > >> > >> > Still failed because PETSc claimed that: there were no needed fortran >> > libraries when using mpicc as the linker. But in fact, we need these >> > fortran stuffs when linking blaslapack and mumps. >> > >> > >> > > >> > > Can you send configure.log for this? >> > > >> > > And its not clear to me why balay/fix-checkFortranLibraries/maint >> broke >> > > before this fix. [for one configure.log was incomplete] >> > > >> > >> > I am not 100% sure, but I think the complied and linked executable can >> not >> > run because of "glibc_2.14' not found". The version of glibc was too >> low. >> > >> > >> > So current solution for me is that: your branch + a new version of glibc >> > (2.18). >> > >> > Thanks, >> > >> > Fande, >> > >> > >> > >> > > >> > > Satish >> > > >> > > On Tue, 17 Mar 2020, Fande Kong wrote: >> > > >> > > > Hi Satish, >> > > > >> > > > Could you merge your branch, balay/fix-checkFortranLibraries/maint, >> into >> > > > maint? >> > > > >> > > > I added glibc to my conda environment (conda install -c >> dan_blanchard >> > > > glibc), and your branch ran well. >> > > > >> > > > If you are interested, I attached the successful log file here. >> > > > >> > > > Thanks, >> > > > >> > > > Fande >> > > > >> > > > On Sat, Mar 14, 2020 at 5:01 PM Fande Kong >> wrote: >> > > > >> > > > > Without touching the configuration file, the >> > > > > option: --download-hypre-configure-arguments='LIBS="-lmpifort >> > > -lgfortran"', >> > > > > also works. >> > > > > >> > > > > >> > > > > Thanks, Satish, >> > > > > >> > > > > >> > > > > Fande, >> > > > > >> > > > > On Sat, Mar 14, 2020 at 4:37 PM Fande Kong >> > > wrote: >> > > > > >> > > > >> OK. I finally got PETSc complied. >> > > > >> >> > > > >> "-lgfortran" was required by fblaslapack >> > > > >> "-lmpifort" was required by mumps. 
>> > > > >> >> > > > >> However, I had to manually add the same thing for hypre as well: >> > > > >> >> > > > >> git diff >> > > > >> diff --git a/config/BuildSystem/config/packages/hypre.py >> > > > >> b/config/BuildSystem/config/packages/hypre.py >> > > > >> index 4d915c312f..f4300230a6 100644 >> > > > >> --- a/config/BuildSystem/config/packages/hypre.py >> > > > >> +++ b/config/BuildSystem/config/packages/hypre.py >> > > > >> @@ -66,6 +66,7 @@ class Configure(config.package.GNUPackage): >> > > > >> args.append('--with-lapack-lib=" "') >> > > > >> args.append('--with-blas=no') >> > > > >> args.append('--with-lapack=no') >> > > > >> + args.append('LIBS="-lmpifort -lgfortran"') >> > > > >> if self.openmp.found: >> > > > >> args.append('--with-openmp') >> > > > >> self.usesopenmp = 'yes' >> > > > >> >> > > > >> >> > > > >> Why hypre could not pick up LIBS options automatically? >> > > > >> >> > > > >> >> > > > >> Thanks, >> > > > >> >> > > > >> Fande, >> > > > >> >> > > > >> >> > > > >> >> > > > >> >> > > > >> On Sat, Mar 14, 2020 at 2:49 PM Satish Balay via petsc-users < >> > > > >> petsc-users at mcs.anl.gov> wrote: >> > > > >> >> > > > >>> Configure Options: --configModules=PETSc.Configure >> > > > >>> --optionsModule=config.compilerOptions --download-hypre=1 >> > > > >>> --with-debugging=no --with-shared-libraries=1 >> > > --download-fblaslapack=1 >> > > > >>> --download-metis=1 --download-ptscotch=1 --download-parmetis=1 >> > > > >>> --download-superlu_dist=1 --download-mumps=1 >> --download-scalapack=1 >> > > > >>> --download-slepc=git://https://gitlab.com/slepc/slepc.git >> > > > >>> --download-slepc-commit= 59ff81b --with-mpi=1 >> > > --with-cxx-dialect=C++11 >> > > > >>> --with-fortran-bindings=0 --with-sowing=0 CFLAGS=-march=nocona >> > > > >>> -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong >> > > -fno-plt -O2 >> > > > >>> -ffunction-sections -pipe -isystem >> > > > >>> /home/kongf/workhome/rod/miniconda3/include CXXFLAGS= >> LDFLAGS=-Wl,-O2 >> > > > >>> -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now >> > > > >>> -Wl,--with-new-dtags=0 -Wl,--gc-sections >> > > > >>> -Wl,-rpath,/home/kongf/workhome/rod/miniconda3/lib >> > > > >>> -Wl,-rpath-link,/home/kongf/workhome/rod/miniconda3/lib >> > > > >>> -L/home/kongf/workhome/rod/miniconda3/lib >> > > > >>> >> > > >> AR=/home/kongf/workhome/rod/miniconda3/bin/x86_64-conda_cos6-linux-gnu-ar >> > > > >>> --with-mpi-dir=/home/kongf/workhome/rod/mpich LIBS=-lgfortran >> > > -lmpifort >> > > > >>> >> > > > >>> You are missing quotes with LIBS option - and likely the >> libraries in >> > > > >>> the wrong order. >> > > > >>> >> > > > >>> Suggest using: >> > > > >>> >> > > > >>> LIBS="-lmpifort -lgfortran" >> > > > >>> or >> > > > >>> 'LIBS=-lmpifort -lgfortran' >> > > > >>> >> > > > >>> Assuming you are invoking configure from shell. >> > > > >>> >> > > > >>> Satish >> > > > >>> >> > > > >>> On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: >> > > > >>> >> > > > >>> > to work around - you can try: >> > > > >>> > >> > > > >>> > LIBS="-lmpifort -lgfortran" >> > > > >>> > >> > > > >>> > Satish >> > > > >>> > >> > > > >>> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote: >> > > > >>> > >> > > > >>> > > Its the same location as before. For some reason configure >> is not >> > > > >>> saving the relevant logs. >> > > > >>> > > >> > > > >>> > > I don't understand saveLog() restoreLog() stuff. Matt, can >> you >> > > check >> > > > >>> on this? 
>> > > > >>> > > >> > > > >>> > > Satish >> > > > >>> > > >> > > > >>> > > On Sat, 14 Mar 2020, Fande Kong wrote: >> > > > >>> > > >> > > > >>> > > > The configuration crashed earlier than before with your >> > > changes. >> > > > >>> > > > >> > > > >>> > > > Please see the attached log file when using your branch. >> The >> > > > >>> trouble lines >> > > > >>> > > > should be: >> > > > >>> > > > >> > > > >>> > > > " asub=self.mangleFortranFunction("asub") >> > > > >>> > > > cbody = "extern void "+asub+"(void);\nint main(int >> > > argc,char >> > > > >>> > > > **args)\n{\n "+asub+"();\n return 0;\n}\n"; >> > > > >>> > > > " >> > > > >>> > > > >> > > > >>> > > > Thanks, >> > > > >>> > > > >> > > > >>> > > > Fande, >> > > > >>> > > > >> > > > >>> > > > On Thu, Mar 12, 2020 at 7:06 PM Satish Balay < >> > > balay at mcs.anl.gov> >> > > > >>> wrote: >> > > > >>> > > > >> > > > >>> > > > > I can't figure out what the stack in the attached >> > > configure.log. >> > > > >>> [likely >> > > > >>> > > > > some stuff isn't getting logged in it] >> > > > >>> > > > > >> > > > >>> > > > > Can you retry with branch >> > > > >>> 'balay/fix-checkFortranLibraries/maint'? >> > > > >>> > > > > >> > > > >>> > > > > Satish >> > > > >>> > > > > >> > > > >>> > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >> > > > >>> > > > > >> > > > >>> > > > > > Thanks, Satish, >> > > > >>> > > > > > >> > > > >>> > > > > > But still have the problem. Please see the attached >> log >> > > file. >> > > > >>> > > > > > >> > > > >>> > > > > > Thanks, >> > > > >>> > > > > > >> > > > >>> > > > > > Fande. >> > > > >>> > > > > > >> > > > >>> > > > > > On Thu, Mar 12, 2020 at 3:42 PM Satish Balay < >> > > > >>> balay at mcs.anl.gov> wrote: >> > > > >>> > > > > > >> > > > >>> > > > > > > Can you retry with the attached patch? >> > > > >>> > > > > > > >> > > > >>> > > > > > > BTW: Its best to use the latest patched version - >> i.e >> > > > >>> > > > > petsc-3.12.4.tar.gz >> > > > >>> > > > > > > >> > > > >>> > > > > > > Satish >> > > > >>> > > > > > > >> > > > >>> > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >> > > > >>> > > > > > > >> > > > >>> > > > > > > > This fixed the fblaslapack issue. Now have another >> > > issue >> > > > >>> about mumps. >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > Please see the log file attached. >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > Thanks, >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > Fande, >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > On Thu, Mar 12, 2020 at 1:38 PM Satish Balay < >> > > > >>> balay at mcs.anl.gov> >> > > > >>> > > > > wrote: >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > > For some reason - the fortran compiler libraries >> > > check >> > > > >>> worked fine >> > > > >>> > > > > > > without >> > > > >>> > > > > > > > > -lgfortran. >> > > > >>> > > > > > > > > >> > > > >>> > > > > > > > > But now - flbaslapack check is failing without >> it. >> > > > >>> > > > > > > > > >> > > > >>> > > > > > > > > To work arround - you can use option >> LIBS=-lgfortran >> > > > >>> > > > > > > > > >> > > > >>> > > > > > > > > Satish >> > > > >>> > > > > > > > > >> > > > >>> > > > > > > > > On Thu, 12 Mar 2020, Fande Kong wrote: >> > > > >>> > > > > > > > > >> > > > >>> > > > > > > > > > Hi All, >> > > > >>> > > > > > > > > > >> > > > >>> > > > > > > > > > I had an issue when configuring petsc on a >> linux >> > > > >>> machine. 
>> > > > >>> > > > > > > > > > I have the following error message:
>> > > > >>> > > > > > > > > >
>> > > > >>> > > > > > > > > >   Compiling FBLASLAPACK; this may take several minutes
>> > > > >>> > > > > > > > > > ===============================================================================
>> > > > >>> > > > > > > > > >   TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:120)
>> > > > >>> > > > > > > > > > *******************************************************************************
>> > > > >>> > > > > > > > > >   UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
>> > > > >>> > > > > > > > > > -------------------------------------------------------------------------------
>> > > > >>> > > > > > > > > >   --download-fblaslapack libraries cannot be used
>> > > > >>> > > > > > > > > > *******************************************************************************
>> > > > >>> > > > > > > > > >
>> > > > >>> > > > > > > > > > The configuration log was attached.
>> > > > >>> > > > > > > > > >
>> > > > >>> > > > > > > > > > Thanks,
>> > > > >>> > > > > > > > > >
>> > > > >>> > > > > > > > > > Fande,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From alexlindsay239 at gmail.com Wed Mar 18 18:47:59 2020
From: alexlindsay239 at gmail.com (Alexander Lindsay)
Date: Wed, 18 Mar 2020 16:47:59 -0700
Subject: [petsc-users] Rebuilding libmesh
In-Reply-To:
References:
Message-ID:

Does anyone have a suggestion for this compilation error from petscconf.h?
Sorry this is with a somewhat old PETSc version: configure:34535: checking whether we can compile a trivial PETSc program configure:34564: mpicxx -c -std=gnu++11 -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include conftest.cpp >&5 In file included from /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscsys.h:14:0, from /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscbag.h:4, from /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petsc.h:5, from conftest.cpp:144: /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscconf.h:85:36: error: expected '}' before '__attribute' #define PETSC_DEPRECATED_ENUM(why) __attribute((deprecated)) ^ /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:430:76: note: in expansion of macro 'PETSC_DEPRECATED_ENUM' #define KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED KSP_DIVERGED_PCSETUP_FAILED PETSC_DEPRECATED_ENUM("Use KSP_DIVERGED_PC_FAILED (since v3.11)") ^ /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:452:15: note: in expansion of macro 'KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED' KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED = -11, On Wed, Mar 18, 2020 at 2:55 PM Lin wrote: > Hi, all, > > I met a problem with > > error: *** PETSc was not found, but --enable-petsc-required was specified. > > when I reinstalled MOOSE. However, I had been using MOOSE with no issues > previously. Does someone know how to solve it? My system is Ubuntu 18.04. > > The error is listed as following: > > Found valid MPI installation... > > note: using /opt/moose/mpich-3.3/gcc-9.2.0/include/mpi.h > > checking mpi.h usability... yes > > checking mpi.h presence... yes > > checking for mpi.h... yes > > checking /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h > usability... yes > > checking /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h > presence... yes > > checking for /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h... > yes > > <<< Found PETSc 3.11.4 installation in /opt/moose/petsc-3.11.4/mpich-3.3 > _gcc-9.2.0-opt ... >>> > > checking whether we can compile a trivial PETSc program... no > > checking for TAO support via PETSc... no > > configure: error: *** PETSc was not found, but --enable-petsc-required > was specified. > make: *** No targets specified and no makefile found. Stop. > > > > Besides, I attached my libMesh configure log file in the email. > > Regards, > Lin > > -- > You received this message because you are subscribed to the Google Groups > "moose-users" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to moose-users+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/moose-users/db12322c-eae6-4ed4-b54f-3ab5e118f466%40googlegroups.com > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Mar 18 18:57:10 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 18 Mar 2020 17:57:10 -0600 Subject: [petsc-users] Rebuilding libmesh In-Reply-To: References: Message-ID: <87r1xpw71l.fsf@jedbrown.org> Alexander Lindsay writes: > Does anyone have a suggestion for this compilation error from petscconf.h? 
> Sorry this is with a somewhat old PETSc version: > > configure:34535: checking whether we can compile a trivial PETSc program > configure:34564: mpicxx -c -std=gnu++11 What do you get with `mpicxx --version`? This is usually a result of configuring PETSc with a different compiler version than you use to run. > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include conftest.cpp >&5 > In file included from > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscsys.h:14:0, > from > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscbag.h:4, > from > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petsc.h:5, > from conftest.cpp:144: > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscconf.h:85:36: > error: expected '}' before '__attribute' > #define PETSC_DEPRECATED_ENUM(why) __attribute((deprecated)) > ^ > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:430:76: > note: in expansion of macro 'PETSC_DEPRECATED_ENUM' > #define KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED KSP_DIVERGED_PCSETUP_FAILED > PETSC_DEPRECATED_ENUM("Use KSP_DIVERGED_PC_FAILED (since v3.11)") > > ^ > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:452:15: > note: in expansion of macro 'KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED' > KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED = -11, > > > On Wed, Mar 18, 2020 at 2:55 PM Lin wrote: > >> Hi, all, >> >> I met a problem with >> >> error: *** PETSc was not found, but --enable-petsc-required was specified. >> >> when I reinstalled MOOSE. However, I had been using MOOSE with no issues >> previously. Does someone know how to solve it? My system is Ubuntu 18.04. >> >> The error is listed as following: >> >> Found valid MPI installation... >> >> note: using /opt/moose/mpich-3.3/gcc-9.2.0/include/mpi.h >> >> checking mpi.h usability... yes >> >> checking mpi.h presence... yes >> >> checking for mpi.h... yes >> >> checking /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h >> usability... yes >> >> checking /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h >> presence... yes >> >> checking for /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h... >> yes >> >> <<< Found PETSc 3.11.4 installation in /opt/moose/petsc-3.11.4/mpich-3.3 >> _gcc-9.2.0-opt ... >>> >> >> checking whether we can compile a trivial PETSc program... no >> >> checking for TAO support via PETSc... no >> >> configure: error: *** PETSc was not found, but --enable-petsc-required >> was specified. >> make: *** No targets specified and no makefile found. Stop. >> >> >> >> Besides, I attached my libMesh configure log file in the email. >> >> Regards, >> Lin >> >> -- >> You received this message because you are subscribed to the Google Groups >> "moose-users" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to moose-users+unsubscribe at googlegroups.com. >> To view this discussion on the web visit >> https://groups.google.com/d/msgid/moose-users/db12322c-eae6-4ed4-b54f-3ab5e118f466%40googlegroups.com >> >> . >> > > -- > You received this message because you are subscribed to the Google Groups "moose-users" group. > To unsubscribe from this group and stop receiving emails from it, send an email to moose-users+unsubscribe at googlegroups.com. 
> To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/CANFcJrE%2BURQoK0UiqBEsB9yZ2Qbbj24W_S_n8qYzxOBtD41Yzw%40mail.gmail.com. From fdkong.jd at gmail.com Wed Mar 18 19:06:36 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 18 Mar 2020 18:06:36 -0600 Subject: [petsc-users] Rebuilding libmesh In-Reply-To: <87r1xpw71l.fsf@jedbrown.org> References: <87r1xpw71l.fsf@jedbrown.org> Message-ID: HI Lin, Do you have a home-brew installed MPI? " configure:6076: mpif90 -v >&5 mpifort for MPICH version 3.3 Reading specs from /home/lin/.linuxbrew/Cellar/gcc/5.5.0_7/bin/../lib/gcc/x86_64-unknown-linux-gnu/5.5.0/specs " MOOSE environment package should carry everything you need: compiler, mpi, and petsc. You could home-brew uninstall your mpi, and retry. Thanks, Fande, On Wed, Mar 18, 2020 at 5:57 PM Jed Brown wrote: > Alexander Lindsay writes: > > > Does anyone have a suggestion for this compilation error from > petscconf.h? > > Sorry this is with a somewhat old PETSc version: > > > > configure:34535: checking whether we can compile a trivial PETSc program > > configure:34564: mpicxx -c -std=gnu++11 > > What do you get with `mpicxx --version`? > > This is usually a result of configuring PETSc with a different compiler > version than you use to run. > > > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include > > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include > > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include > > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include conftest.cpp > >&5 > > In file included from > > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscsys.h:14:0, > > from > > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscbag.h:4, > > from > > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petsc.h:5, > > from conftest.cpp:144: > > > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscconf.h:85:36: > > error: expected '}' before '__attribute' > > #define PETSC_DEPRECATED_ENUM(why) __attribute((deprecated)) > > ^ > > > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:430:76: > > note: in expansion of macro 'PETSC_DEPRECATED_ENUM' > > #define KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED > KSP_DIVERGED_PCSETUP_FAILED > > PETSC_DEPRECATED_ENUM("Use KSP_DIVERGED_PC_FAILED (since v3.11)") > > > > ^ > > > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:452:15: > > note: in expansion of macro 'KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED' > > KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED = -11, > > > > > > On Wed, Mar 18, 2020 at 2:55 PM Lin wrote: > > > >> Hi, all, > >> > >> I met a problem with > >> > >> error: *** PETSc was not found, but --enable-petsc-required was > specified. > >> > >> when I reinstalled MOOSE. However, I had been using MOOSE with no issues > >> previously. Does someone know how to solve it? My system is Ubuntu > 18.04. > >> > >> The error is listed as following: > >> > >> Found valid MPI installation... > >> > >> note: using /opt/moose/mpich-3.3/gcc-9.2.0/include/mpi.h > >> > >> checking mpi.h usability... yes > >> > >> checking mpi.h presence... yes > >> > >> checking for mpi.h... yes > >> > >> checking > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h > >> usability... yes > >> > >> checking > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h > >> presence... yes > >> > >> checking for > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include/petscversion.h... 
> >> yes > >> > >> <<< Found PETSc 3.11.4 installation in /opt/moose/petsc-3.11.4/mpich-3.3 > >> _gcc-9.2.0-opt ... >>> > >> > >> checking whether we can compile a trivial PETSc program... no > >> > >> checking for TAO support via PETSc... no > >> > >> configure: error: *** PETSc was not found, but --enable-petsc-required > >> was specified. > >> make: *** No targets specified and no makefile found. Stop. > >> > >> > >> > >> Besides, I attached my libMesh configure log file in the email. > >> > >> Regards, > >> Lin > >> > >> -- > >> You received this message because you are subscribed to the Google > Groups > >> "moose-users" group. > >> To unsubscribe from this group and stop receiving emails from it, send > an > >> email to moose-users+unsubscribe at googlegroups.com. > >> To view this discussion on the web visit > >> > https://groups.google.com/d/msgid/moose-users/db12322c-eae6-4ed4-b54f-3ab5e118f466%40googlegroups.com > >> < > https://groups.google.com/d/msgid/moose-users/db12322c-eae6-4ed4-b54f-3ab5e118f466%40googlegroups.com?utm_medium=email&utm_source=footer > > > >> . > >> > > > > -- > > You received this message because you are subscribed to the Google > Groups "moose-users" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to moose-users+unsubscribe at googlegroups.com. > > To view this discussion on the web visit > https://groups.google.com/d/msgid/moose-users/CANFcJrE%2BURQoK0UiqBEsB9yZ2Qbbj24W_S_n8qYzxOBtD41Yzw%40mail.gmail.com > . > > -- > You received this message because you are subscribed to the Google Groups > "moose-users" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to moose-users+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/moose-users/87r1xpw71l.fsf%40jedbrown.org > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Mar 18 20:40:24 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 18 Mar 2020 19:40:24 -0600 Subject: [petsc-users] Rebuilding libmesh In-Reply-To: References: <87r1xpw71l.fsf@jedbrown.org> Message-ID: <87o8stw29j.fsf@jedbrown.org> Fande Kong writes: > HI Lin, > > Do you have a home-brew installed MPI? > > " > configure:6076: mpif90 -v >&5 > mpifort for MPICH version 3.3 > Reading specs from > /home/lin/.linuxbrew/Cellar/gcc/5.5.0_7/bin/../lib/gcc/x86_64-unknown-linux-gnu/5.5.0/specs So "mpich-3.3_gcc-9.2.0" uses gcc-5.5.0? Was it also like that when you configured PETSc, or is this a result of changing environment variables? As usual, configure.log would have helped answer these sorts of questions. > " > > MOOSE environment package should carry everything you need: compiler, mpi, > and petsc. > > You could home-brew uninstall your mpi, and retry. > > Thanks, > > Fande, > > On Wed, Mar 18, 2020 at 5:57 PM Jed Brown wrote: > >> Alexander Lindsay writes: >> >> > Does anyone have a suggestion for this compilation error from >> petscconf.h? >> > Sorry this is with a somewhat old PETSc version: >> > >> > configure:34535: checking whether we can compile a trivial PETSc program >> > configure:34564: mpicxx -c -std=gnu++11 >> >> What do you get with `mpicxx --version`? >> >> This is usually a result of configuring PETSc with a different compiler >> version than you use to run. 
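[For reference, a quick way to compare the compiler a PETSc install was configured with against the MPI wrapper currently on the PATH; this is not from the thread, the install prefix is just the one quoted above, and the petscvariables location assumes a prefix install, so adjust for your machine:

  which mpicxx && mpicxx --version
  grep -E '^(CC|CXX|FC) ' /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/lib/petsc/conf/petscvariables

If the two report different compilers, for example gcc 9.2.0 in petscvariables versus a Linuxbrew gcc 5.5.0 behind mpicxx, that is exactly the mismatch described above.]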
>> >> > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include >> > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt//include >> > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include >> > -I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include conftest.cpp >> >&5 >> > In file included from >> > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscsys.h:14:0, >> > from >> > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscbag.h:4, >> > from >> > /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petsc.h:5, >> > from conftest.cpp:144: >> > >> /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscconf.h:85:36: >> > error: expected '}' before '__attribute' >> > #define PETSC_DEPRECATED_ENUM(why) __attribute((deprecated)) >> > ^ >> > >> /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:430:76: >> > note: in expansion of macro 'PETSC_DEPRECATED_ENUM' >> > #define KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED >> KSP_DIVERGED_PCSETUP_FAILED >> > PETSC_DEPRECATED_ENUM("Use KSP_DIVERGED_PC_FAILED (since v3.11)") >> > >> > ^ >> > >> /opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include/petscksp.h:452:15: >> > note: in expansion of macro 'KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED' >> > KSP_DIVERGED_PCSETUP_FAILED_DEPRECATED = -11, From berend.vanwachem at ovgu.de Thu Mar 19 08:59:32 2020 From: berend.vanwachem at ovgu.de (Berend van Wachem) Date: Thu, 19 Mar 2020 14:59:32 +0100 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: <32dee05c-99f7-e28b-a9ba-56f072f066e0@ovgu.de> Dear Matt, I haven't been able to progress further with changing the Section of a P4EST DM. Therefore, what do you think of the following strategy: 1. Create the DMPlex versions of dmIn and dmOut from the subsequent P4EST DM's 2. Create the appropriate sections onto the DMPlex versions of dmIn and dmOut 3. Get the star forests and ChildIDs 4. Use the functions DMPlexTransferVecTree_Interpolate and DMPlexTransferVecTree_Inject with the DMPlex dms, star forest and ChildIDs Do you think this is a viable strategy? I'd like your opinion before I start on this journey ... Thanks and my best wishes in these difficult times, Berend. On 2020-03-13 14:50, Matthew Knepley wrote: > On Fri, Mar 13, 2020 at 9:45 AM Berend van Wachem > > wrote: > > Dear Matt, > > Thanks for your response. My understanding of the DM and DMClone is the > same - and I have tested this with a DMPLEX DM without problems. > > However, for some reason, I cannot change/set the section of a P4EST > dm. > In the attached example code, I get an error in line 140, where I > try to > create a new section from the cloned P4EST DM. Is it not possible to > create/set a section on a P4EST DM? Or maybe I am doing something else > wrong? Do you suggest a workaround? > > > Ah, I see. Let me check your example. > > Toby, is this the way p4est acts right now? > > ? Thanks, > > ? ? ?Matt > > Many thanks, Berend. > > > On 2020-03-13 00:19, Matthew Knepley wrote: > > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem > > > >> > wrote: > > > >? ? ?Dear All, > > > >? ? ?I have started to use DMPLEX with P4EST for a computational fluid > >? ? ?dynamics application.?I am solving a coupled system of 4 > discretised > >? ? ?equations (for 3 velocity components and one pressure) on a mesh. > >? ? ?However, next to these 4 variables, I also have a few single > field > >? ? 
?variables (such as density and viscosity) defined over the mesh, > >? ? ?which I > >? ? ?don't solve for (they should not be part of the matrix with > unknowns). > >? ? ?Most of these variables are at the cell centers, but in a few > cases, it > >? ? ?want to define them at cell faces. > > > >? ? ?With just DMPLEX, I solve this by: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? ?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMCreateGlobalVector (and Matrix), so I get an Unknown > vector, a RHS > >? ? ?vector, and a matrix for the 4 variables. > > > >? ? ?To get a vector for a single variable at the cell center or > the cell > >? ? ?face, I clone the original DM, I define a new Section on it, > and then > >? ? ?create the vector from that which I need (e.g. for density, > >? ? ?viscosity or > >? ? ?a velocity at the cell face). > > > >? ? ?Then I loop over the mesh, and with MatSetValuesLocal, I set the > >? ? ?coefficients. After that, I solve the system for multiple > timesteps > >? ? ?(sequential solves) and get the solution vector with the 4 > variables > >? ? ?after each solve. > > > >? ? ?So-far, this works fine with DMPLEX. However, now I want to > use P4EST, > >? ? ?and I have difficulty defining a variable vector other than the > >? ? ?original 4. > > > >? ? ?I have changed the code structure: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? ?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMForestSetBaseDM(DM, DMForest) to create a DMForest > >? ? ?DMCreateGlobalVector (and Matrix), so I get a Unknown vector, > a RHS > >? ? ?vector, and a matrix for the 4 variables > > > >? ? ?then I perform multiple time-steps, > >? ? ? ? ?DMForestTemplate(DMForest -> ?DMForestPost) > >? ? ? ? ?Adapt DMForestPost > >? ? ? ? ?DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > >? ? ? ? ?DMForestTransferVec(UnknownVector , RefinedUnknownVector) > >? ? ? ? ?DMForestPost -> DMForest > >? ? ?and then DMConvert(DMForest,DMPLEX,DM) > >? ? ?and I can solve the system as usual. That also seems to work. > > > >? ? ?But my conceptual question: how can I convert the other variable > >? ? ?vectors > >? ? ?(obtained with a different section on the same DM) such as > density and > >? ? ?viscosity and faceVelocity within this framework? > > > > > > Here is my current thinking about DMs. A DM is a function space > > overlaying a topology. Much to my dismay, we > > do not have a topology object, so it hides inside DM. DMClone() > creates > > a shallow copy of the topology. We use > > this to have any number of data layouts through PetscSection, laying > > over the same underlying topology. > > > > So for each layout you have, make a separate clone. Then things like > > TransferVec() will respond to the layout in > > that clone. Certainly it works this way in Plex. I admit to not > having > > tried this for TransferVec(), but let me know if > > you have any problems. > > > > BTW, I usually use a dm for the solution, which I give to the > solver, > > say SNESSetDM(snes, dm), and then clone > > it as dmAux which has the layout for all the auxiliary fields > that are > > not involved in the solve. The Plex examples > > all use this form. > > > >? ? Thanks, > > > >? ? ? ?Matt > > > >? ? ?The DMForest has the same Section as the original DM and will > thus have > >? ? ?the space for exactly 4 variables per cell. I tried pushing > another > >? ? 
?section on the DMForest and DMForestPost, but that does not > seem to > >? ? ?work. Please find attached a working example with code to do > this, > >? ? ?but I > >? ? ?get the error: > > > >? ? ?PETSC ERROR: PetscSectionGetChart() line 513 in > > > ?/usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c Wrong > >? ? ?type of object: Parameter # 1 > > > >? ? ?So, I is there a way to DMForestTransferVec my other vectors > from one > >? ? ?DMForest to DMForestPost. How can I do this? > > > >? ? ?Many thanks for your help! > > > >? ? ?Best wishes, Berend. > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From jeth8984 at colorado.edu Thu Mar 19 16:02:07 2020 From: jeth8984 at colorado.edu (Jeremy L Thompson) Date: Thu, 19 Mar 2020 15:02:07 -0600 Subject: [petsc-users] Continuous FE with Discontinuous Pressure Field Message-ID: <545aefa6-9e4f-5520-7344-2cd6b6c5d57d@colorado.edu> Good Afternoon, I have a working high-order 3D elasticity code built on PETSc + libCEED. My DM has a three component displacement FE field. I'd like to add a discontinuous pressure field (P_-1). Am I missing an obvious way to set up a discontinuous FE field? Or is a FV field a more natural choice for this sort of thing? Or a different approach? I am managing action of the FDE operator myself via libCEED, so I don't need to get too into the weeds about anything but the DM. I just need to find the best way to manage the discontinuous pressure values on each element in my DM. Thanks, Jeremy L Thompson From knepley at gmail.com Thu Mar 19 16:41:00 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Mar 2020 17:41:00 -0400 Subject: [petsc-users] Continuous FE with Discontinuous Pressure Field In-Reply-To: <545aefa6-9e4f-5520-7344-2cd6b6c5d57d@colorado.edu> References: <545aefa6-9e4f-5520-7344-2cd6b6c5d57d@colorado.edu> Message-ID: On Thu, Mar 19, 2020 at 5:05 PM Jeremy L Thompson wrote: > Good Afternoon, > > I have a working high-order 3D elasticity code built on PETSc + libCEED. > My DM has a three component displacement FE field. I'd like to add a > discontinuous pressure field (P_-1). > By P_{-1} do you mean a discontinuous linear basis? I only ask because FV is normally P_0. Anyway, PetscFE has discontinuous bases if you want to use that. You use the flag https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DUALSPACE/PetscDualSpaceLagrangeSetContinuity.html Thanks, Matt > Am I missing an obvious way to set up a discontinuous FE field? Or is a > FV field a more natural choice for this sort of thing? Or a different > approach? > > I am managing action of the FDE operator myself via libCEED, so I don't > need to get too into the weeds about anything but the DM. I just need to > find the best way to manage the discontinuous pressure values on each > element in my DM. > > Thanks, > > Jeremy L Thompson > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
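[For reference, a rough, untested sketch (not from this thread) of one way a discontinuous pressure field could be added next to the displacement field; the options prefix "pres_", the variable name "dm", and the use of DMAddField/DMCreateDS are assumptions about the surrounding code, and error checking is omitted:

  PetscFE fe_pres;
  /* one scalar component on a 3D simplex mesh; quadrature order left to the default */
  PetscFECreateDefault(PETSC_COMM_WORLD, 3, 1, PETSC_TRUE, "pres_", PETSC_DEFAULT, &fe_pres);
  DMAddField(dm, NULL, (PetscObject) fe_pres);
  DMCreateDS(dm);
  PetscFEDestroy(&fe_pres);

with the discontinuous dual space requested from the options database, e.g.

  -pres_petscspace_degree 1 -pres_petscdualspace_lagrange_continuity 0

or, programmatically, by calling the PetscDualSpaceLagrangeSetContinuity() routine linked above on the FE's dual space before it is set up.]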
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeth8984 at colorado.edu Thu Mar 19 17:01:46 2020 From: jeth8984 at colorado.edu (Jeremy L Thompson) Date: Thu, 19 Mar 2020 16:01:46 -0600 Subject: [petsc-users] Continuous FE with Discontinuous Pressure Field In-Reply-To: References: <545aefa6-9e4f-5520-7344-2cd6b6c5d57d@colorado.edu> Message-ID: <7c991ef0-6265-b0fc-3024-4630027c0cc5@colorado.edu> Thanks Matt, that's what I need. Jeremy On 3/19/20 3:41 PM, Matthew Knepley wrote: > On Thu, Mar 19, 2020 at 5:05 PM Jeremy L Thompson > > wrote: > > Good Afternoon, > > I have a working high-order 3D elasticity code built on PETSc + > libCEED. > My DM has a three component displacement FE field. I'd like to add a > discontinuous pressure field (P_-1). > > > By P_{-1} do you mean a discontinuous linear?basis? I only ask because > FV is normally P_0. > Anyway, PetscFE has discontinuous bases if you want to use that. You > use the flag > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DUALSPACE/PetscDualSpaceLagrangeSetContinuity.html > > ? Thanks, > > ? ? Matt > > Am I missing an obvious way to set up a discontinuous FE field? Or > is a > FV field a more natural choice for this sort of thing? Or a different > approach? > > I am managing action of the FDE operator myself via libCEED, so I > don't > need to get too into the weeds about anything but the DM. I just > need to > find the best way to manage the discontinuous pressure values on each > element in my DM. > > Thanks, > > Jeremy L Thompson > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 19 17:07:09 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Mar 2020 18:07:09 -0400 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: On Fri, Mar 13, 2020 at 9:45 AM Berend van Wachem wrote: > Dear Matt, > > Thanks for your response. My understanding of the DM and DMClone is the > same - and I have tested this with a DMPLEX DM without problems. > > However, for some reason, I cannot change/set the section of a P4EST dm. > In the attached example code, I get an error in line 140, where I try to > create a new section from the cloned P4EST DM. Is it not possible to > create/set a section on a P4EST DM? Or maybe I am doing something else > wrong? Do you suggest a workaround? > Hi Berend, Sorry I am behind. The problem on line 140 is that you call a DMPlex function (DMPlexCreateSection) with a DMForest object. That is illegal. You can, however, call that function using the Plex you get from a Forest using DMConvert(DMForestClone, DMPLEX, &plexClone). I will get your code running as soon as I can, but after you create the Section, attaching it should be fine. Thanks, Matt > Many thanks, Berend. > > > On 2020-03-13 00:19, Matthew Knepley wrote: > > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem > > > wrote: > > > > Dear All, > > > > I have started to use DMPLEX with P4EST for a computational fluid > > dynamics application. I am solving a coupled system of 4 discretised > > equations (for 3 velocity components and one pressure) on a mesh. 
> > However, next to these 4 variables, I also have a few single field > > variables (such as density and viscosity) defined over the mesh, > > which I > > don't solve for (they should not be part of the matrix with > unknowns). > > Most of these variables are at the cell centers, but in a few cases, > it > > want to define them at cell faces. > > > > With just DMPLEX, I solve this by: > > > > DMPlexCreateMesh, so I get an initial DM > > DMPlexCreateSection, indicating the need for 4 variables > > DMSetLocalSection > > DMCreateGlobalVector (and Matrix), so I get an Unknown vector, a RHS > > vector, and a matrix for the 4 variables. > > > > To get a vector for a single variable at the cell center or the cell > > face, I clone the original DM, I define a new Section on it, and then > > create the vector from that which I need (e.g. for density, > > viscosity or > > a velocity at the cell face). > > > > Then I loop over the mesh, and with MatSetValuesLocal, I set the > > coefficients. After that, I solve the system for multiple timesteps > > (sequential solves) and get the solution vector with the 4 variables > > after each solve. > > > > So-far, this works fine with DMPLEX. However, now I want to use > P4EST, > > and I have difficulty defining a variable vector other than the > > original 4. > > > > I have changed the code structure: > > > > DMPlexCreateMesh, so I get an initial DM > > DMPlexCreateSection, indicating the need for 4 variables > > DMSetLocalSection > > DMForestSetBaseDM(DM, DMForest) to create a DMForest > > DMCreateGlobalVector (and Matrix), so I get a Unknown vector, a RHS > > vector, and a matrix for the 4 variables > > > > then I perform multiple time-steps, > > DMForestTemplate(DMForest -> DMForestPost) > > Adapt DMForestPost > > DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > > DMForestTransferVec(UnknownVector , RefinedUnknownVector) > > DMForestPost -> DMForest > > and then DMConvert(DMForest,DMPLEX,DM) > > and I can solve the system as usual. That also seems to work. > > > > But my conceptual question: how can I convert the other variable > > vectors > > (obtained with a different section on the same DM) such as density > and > > viscosity and faceVelocity within this framework? > > > > > > Here is my current thinking about DMs. A DM is a function space > > overlaying a topology. Much to my dismay, we > > do not have a topology object, so it hides inside DM. DMClone() creates > > a shallow copy of the topology. We use > > this to have any number of data layouts through PetscSection, laying > > over the same underlying topology. > > > > So for each layout you have, make a separate clone. Then things like > > TransferVec() will respond to the layout in > > that clone. Certainly it works this way in Plex. I admit to not having > > tried this for TransferVec(), but let me know if > > you have any problems. > > > > BTW, I usually use a dm for the solution, which I give to the solver, > > say SNESSetDM(snes, dm), and then clone > > it as dmAux which has the layout for all the auxiliary fields that are > > not involved in the solve. The Plex examples > > all use this form. > > > > Thanks, > > > > Matt > > > > The DMForest has the same Section as the original DM and will thus > have > > the space for exactly 4 variables per cell. I tried pushing another > > section on the DMForest and DMForestPost, but that does not seem to > > work. 
Please find attached a working example with code to do this, > > but I > > get the error: > > > > PETSC ERROR: PetscSectionGetChart() line 513 in > > /usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c Wrong > > type of object: Parameter # 1 > > > > So, I is there a way to DMForestTransferVec my other vectors from one > > DMForest to DMForestPost. How can I do this? > > > > Many thanks for your help! > > > > Best wishes, Berend. > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ < > http://www.cse.buffalo.edu/~knepley/> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 19 17:39:53 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Mar 2020 18:39:53 -0400 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: Okay this runs for me. Thanks, Matt On Thu, Mar 19, 2020 at 6:07 PM Matthew Knepley wrote: > On Fri, Mar 13, 2020 at 9:45 AM Berend van Wachem < > berend.vanwachem at ovgu.de> wrote: > >> Dear Matt, >> >> Thanks for your response. My understanding of the DM and DMClone is the >> same - and I have tested this with a DMPLEX DM without problems. >> >> However, for some reason, I cannot change/set the section of a P4EST dm. >> In the attached example code, I get an error in line 140, where I try to >> create a new section from the cloned P4EST DM. Is it not possible to >> create/set a section on a P4EST DM? Or maybe I am doing something else >> wrong? Do you suggest a workaround? >> > > Hi Berend, > > Sorry I am behind. The problem on line 140 is that you call a DMPlex > function (DMPlexCreateSection) > with a DMForest object. That is illegal. You can, however, call that > function using the Plex you get from > a Forest using DMConvert(DMForestClone, DMPLEX, &plexClone). I will get > your code running as soon > as I can, but after you create the Section, attaching it should be fine. > > Thanks, > > Matt > > >> Many thanks, Berend. >> >> >> On 2020-03-13 00:19, Matthew Knepley wrote: >> > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem >> > > wrote: >> > >> > Dear All, >> > >> > I have started to use DMPLEX with P4EST for a computational fluid >> > dynamics application. I am solving a coupled system of 4 discretised >> > equations (for 3 velocity components and one pressure) on a mesh. >> > However, next to these 4 variables, I also have a few single field >> > variables (such as density and viscosity) defined over the mesh, >> > which I >> > don't solve for (they should not be part of the matrix with >> unknowns). >> > Most of these variables are at the cell centers, but in a few >> cases, it >> > want to define them at cell faces. >> > >> > With just DMPLEX, I solve this by: >> > >> > DMPlexCreateMesh, so I get an initial DM >> > DMPlexCreateSection, indicating the need for 4 variables >> > DMSetLocalSection >> > DMCreateGlobalVector (and Matrix), so I get an Unknown vector, a RHS >> > vector, and a matrix for the 4 variables. 
>> > >> > To get a vector for a single variable at the cell center or the cell >> > face, I clone the original DM, I define a new Section on it, and >> then >> > create the vector from that which I need (e.g. for density, >> > viscosity or >> > a velocity at the cell face). >> > >> > Then I loop over the mesh, and with MatSetValuesLocal, I set the >> > coefficients. After that, I solve the system for multiple timesteps >> > (sequential solves) and get the solution vector with the 4 variables >> > after each solve. >> > >> > So-far, this works fine with DMPLEX. However, now I want to use >> P4EST, >> > and I have difficulty defining a variable vector other than the >> > original 4. >> > >> > I have changed the code structure: >> > >> > DMPlexCreateMesh, so I get an initial DM >> > DMPlexCreateSection, indicating the need for 4 variables >> > DMSetLocalSection >> > DMForestSetBaseDM(DM, DMForest) to create a DMForest >> > DMCreateGlobalVector (and Matrix), so I get a Unknown vector, a RHS >> > vector, and a matrix for the 4 variables >> > >> > then I perform multiple time-steps, >> > DMForestTemplate(DMForest -> DMForestPost) >> > Adapt DMForestPost >> > DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) >> > DMForestTransferVec(UnknownVector , RefinedUnknownVector) >> > DMForestPost -> DMForest >> > and then DMConvert(DMForest,DMPLEX,DM) >> > and I can solve the system as usual. That also seems to work. >> > >> > But my conceptual question: how can I convert the other variable >> > vectors >> > (obtained with a different section on the same DM) such as density >> and >> > viscosity and faceVelocity within this framework? >> > >> > >> > Here is my current thinking about DMs. A DM is a function space >> > overlaying a topology. Much to my dismay, we >> > do not have a topology object, so it hides inside DM. DMClone() creates >> > a shallow copy of the topology. We use >> > this to have any number of data layouts through PetscSection, laying >> > over the same underlying topology. >> > >> > So for each layout you have, make a separate clone. Then things like >> > TransferVec() will respond to the layout in >> > that clone. Certainly it works this way in Plex. I admit to not having >> > tried this for TransferVec(), but let me know if >> > you have any problems. >> > >> > BTW, I usually use a dm for the solution, which I give to the solver, >> > say SNESSetDM(snes, dm), and then clone >> > it as dmAux which has the layout for all the auxiliary fields that are >> > not involved in the solve. The Plex examples >> > all use this form. >> > >> > Thanks, >> > >> > Matt >> > >> > The DMForest has the same Section as the original DM and will thus >> have >> > the space for exactly 4 variables per cell. I tried pushing another >> > section on the DMForest and DMForestPost, but that does not seem to >> > work. Please find attached a working example with code to do this, >> > but I >> > get the error: >> > >> > PETSC ERROR: PetscSectionGetChart() line 513 in >> > /usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c Wrong >> > type of object: Parameter # 1 >> > >> > So, I is there a way to DMForestTransferVec my other vectors from >> one >> > DMForest to DMForestPost. How can I do this? >> > >> > Many thanks for your help! >> > >> > Best wishes, Berend. >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> > experiments is infinitely more interesting than any results to which >> > their experiments lead. 
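[For reference, a minimal, untested sketch (not from the attached example) of the approach suggested above: clone the forest for the extra field, get a Plex view of it with DMConvert(), lay out the new PetscSection over that Plex, and attach it to the clone. The names and the one-scalar-per-cell layout are placeholders, and error checking is omitted:

  DM           clone, plex;
  PetscSection s;
  PetscInt     pStart, pEnd, cStart, cEnd, p;

  DMClone(forest, &clone);                       /* same topology, separate data layout */
  DMConvert(clone, DMPLEX, &plex);               /* Plex view, so DMPlex* calls are legal */
  DMPlexGetChart(plex, &pStart, &pEnd);
  DMPlexGetHeightStratum(plex, 0, &cStart, &cEnd);          /* cells */
  PetscSectionCreate(PetscObjectComm((PetscObject) plex), &s);
  PetscSectionSetChart(s, pStart, pEnd);
  for (p = cStart; p < cEnd; ++p) PetscSectionSetDof(s, p, 1);  /* e.g. one density dof per cell */
  PetscSectionSetUp(s);
  DMSetLocalSection(clone, s);                   /* attach the layout to the forest clone */
  PetscSectionDestroy(&s);
  DMDestroy(&plex);

Whether the section should finally live on the forest clone (as above) or stay with the converted Plex is exactly the point being worked out in this thread, so treat this only as a starting point.]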
>> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ < >> http://www.cse.buffalo.edu/~knepley/> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dmplexp4est.c Type: application/octet-stream Size: 7713 bytes Desc: not available URL: From boyceg at gmail.com Fri Mar 20 13:39:57 2020 From: boyceg at gmail.com (Boyce Griffith) Date: Fri, 20 Mar 2020 14:39:57 -0400 Subject: [petsc-users] HDF5 errors Message-ID: Homebrew just updated my hdf5, and now I am getting some errors from the HDF5 viewer in maint: /Users/boyceg/sfw/petsc/petsc-maint/src/sys/classes/viewer/impls/hdf5/hdf5v.c:1064:73: error: too few arguments to function call, expected 5, have 4 PetscStackCallHDF5(H5Oget_info_by_name,(h5, name, &info, H5P_DEFAULT)); ~~~~~~~~~~~~~~~~~~~ ^ /Users/boyceg/sfw/petsc/petsc-maint/include/petsc/private/viewerhdf5impl.h:11:42: note: expanded from macro 'PetscStackCallHDF5' PetscStackPush(#func);_status = func args;PetscStackPop; if (_status) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error in HDF5 call %s() Status %d",#func,(int)_status); \ ~~~~ ^ /usr/local/include/H5Opublic.h:188:8: note: 'H5Oget_info_by_name3' declared here H5_DLL herr_t H5Oget_info_by_name3(hid_t loc_id, const char *name, H5O_info2_t *oinfo, ^ 1 error generated. gmake[2]: *** [gmakefile:156: darwin-dbg/obj/sys/classes/viewer/impls/hdf5/hdf5v.o] Error 1 From jed at jedbrown.org Fri Mar 20 13:45:20 2020 From: jed at jedbrown.org (Jed Brown) Date: Fri, 20 Mar 2020 12:45:20 -0600 Subject: [petsc-users] HDF5 errors In-Reply-To: References: Message-ID: <87o8squapr.fsf@jedbrown.org> HDF5-1.12 changed some interfaces. This works in 'master' and we have a release coming this week. Is that okay for you or do you need the fixes backported? Boyce Griffith writes: > Homebrew just updated my hdf5, and now I am getting some errors from the HDF5 viewer in maint: > > /Users/boyceg/sfw/petsc/petsc-maint/src/sys/classes/viewer/impls/hdf5/hdf5v.c:1064:73: error: too few arguments to function call, expected 5, have 4 > PetscStackCallHDF5(H5Oget_info_by_name,(h5, name, &info, H5P_DEFAULT)); > ~~~~~~~~~~~~~~~~~~~ ^ > /Users/boyceg/sfw/petsc/petsc-maint/include/petsc/private/viewerhdf5impl.h:11:42: note: expanded from macro 'PetscStackCallHDF5' > PetscStackPush(#func);_status = func args;PetscStackPop; if (_status) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error in HDF5 call %s() Status %d",#func,(int)_status); \ > ~~~~ ^ > /usr/local/include/H5Opublic.h:188:8: note: 'H5Oget_info_by_name3' declared here > H5_DLL herr_t H5Oget_info_by_name3(hid_t loc_id, const char *name, H5O_info2_t *oinfo, > ^ > 1 error generated. 
> gmake[2]: *** [gmakefile:156: darwin-dbg/obj/sys/classes/viewer/impls/hdf5/hdf5v.o] Error 1 From balay at mcs.anl.gov Fri Mar 20 13:45:54 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 20 Mar 2020 13:45:54 -0500 Subject: [petsc-users] HDF5 errors In-Reply-To: References: Message-ID: This is due to upgrading hdf5 from 1.10 to 1.12 petsc/master is updated to work with hdf5-1.12 However petsc/maint is set hdf5-1.10. So I'm not sure what to suggest. Perhaps backport the following change to maint? diff --git a/include/petsc/private/viewerhdf5impl.h b/include/petsc/private/viewerhdf5impl.h index 00b845d525..d5be7294cc 100644 --- a/include/petsc/private/viewerhdf5impl.h +++ b/include/petsc/private/viewerhdf5impl.h @@ -2,6 +2,11 @@ #ifndef __VIEWERHDF5IMPL_H #define __VIEWERHDF5IMPL_H +#if defined(H5_VERSION) +# error "viewerhdf5impl.h must be included *before* any other HDF5 headers" +#else +# define H5_USE_18_API +#endif #include #if defined(PETSC_HAVE_HDF5) Satish On Fri, 20 Mar 2020, Boyce Griffith wrote: > Homebrew just updated my hdf5, and now I am getting some errors from the HDF5 viewer in maint: > > /Users/boyceg/sfw/petsc/petsc-maint/src/sys/classes/viewer/impls/hdf5/hdf5v.c:1064:73: error: too few arguments to function call, expected 5, have 4 > PetscStackCallHDF5(H5Oget_info_by_name,(h5, name, &info, H5P_DEFAULT)); > ~~~~~~~~~~~~~~~~~~~ ^ > /Users/boyceg/sfw/petsc/petsc-maint/include/petsc/private/viewerhdf5impl.h:11:42: note: expanded from macro 'PetscStackCallHDF5' > PetscStackPush(#func);_status = func args;PetscStackPop; if (_status) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error in HDF5 call %s() Status %d",#func,(int)_status); \ > ~~~~ ^ > /usr/local/include/H5Opublic.h:188:8: note: 'H5Oget_info_by_name3' declared here > H5_DLL herr_t H5Oget_info_by_name3(hid_t loc_id, const char *name, H5O_info2_t *oinfo, > ^ > 1 error generated. > gmake[2]: *** [gmakefile:156: darwin-dbg/obj/sys/classes/viewer/impls/hdf5/hdf5v.o] Error 1 > From boyceg at gmail.com Fri Mar 20 13:46:04 2020 From: boyceg at gmail.com (Boyce Griffith) Date: Fri, 20 Mar 2020 14:46:04 -0400 Subject: [petsc-users] HDF5 errors In-Reply-To: <87o8squapr.fsf@jedbrown.org> References: <87o8squapr.fsf@jedbrown.org> Message-ID: <6D59ABC5-E770-45B6-8144-C3B55E8334E1@gmail.com> That?s OK for me. I can grab an older HDF5. > On Mar 20, 2020, at 2:45 PM, Jed Brown wrote: > > HDF5-1.12 changed some interfaces. This works in 'master' and we have a > release coming this week. Is that okay for you or do you need the fixes > backported? > > Boyce Griffith writes: > >> Homebrew just updated my hdf5, and now I am getting some errors from the HDF5 viewer in maint: >> >> /Users/boyceg/sfw/petsc/petsc-maint/src/sys/classes/viewer/impls/hdf5/hdf5v.c:1064:73: error: too few arguments to function call, expected 5, have 4 >> PetscStackCallHDF5(H5Oget_info_by_name,(h5, name, &info, H5P_DEFAULT)); >> ~~~~~~~~~~~~~~~~~~~ ^ >> /Users/boyceg/sfw/petsc/petsc-maint/include/petsc/private/viewerhdf5impl.h:11:42: note: expanded from macro 'PetscStackCallHDF5' >> PetscStackPush(#func);_status = func args;PetscStackPop; if (_status) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_LIB,"Error in HDF5 call %s() Status %d",#func,(int)_status); \ >> ~~~~ ^ >> /usr/local/include/H5Opublic.h:188:8: note: 'H5Oget_info_by_name3' declared here >> H5_DLL herr_t H5Oget_info_by_name3(hid_t loc_id, const char *name, H5O_info2_t *oinfo, >> ^ >> 1 error generated. 
>> gmake[2]: *** [gmakefile:156: darwin-dbg/obj/sys/classes/viewer/impls/hdf5/hdf5v.o] Error 1 From xliu29 at ncsu.edu Fri Mar 20 17:25:03 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Fri, 20 Mar 2020 15:25:03 -0700 Subject: [petsc-users] Inquiry about the interpolation and restriction matrix for PCMG Message-ID: Hi, Petsc team, I am practising PCMG using the following case (3DQ1) https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex45.c.html I have several questions. *1) About the interpolation and restriction matrix* For both Galerkin and non-Galerkin type, the interpolation matrix P is based on the trilinear interpolation (*I found the source code*) and the restriction one R is 1/8*P^T? *Could you please tell me where the source code to define the restriction matrix is? * *2) About the operator on the coarse level* It is straightforward to calculate the operator on the coarse level for Galerkin type by R*A*P. But how did you define the operator for non-Galerkin type? Did you use DMRestrict ? Could you please tell me where is the source code to define (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); in 2933: PetscErrorCode DMRestrict(DM fine,Mat restrct,Vec rscale,Mat inject,DM coarse) 2934: { 2935: PetscErrorCode ierr; 2936: DMCoarsenHookLink link; 2939: for (link=fine->coarsenhook; link; link=link->next) { 2940: if (link->restricthook) { 2941: (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); 2942: } 2943: } 2944: return(0); 2945: } on https://www.mcs.anl.gov/petsc/petsc-current/src/dm/interface/dm.c.html#DMRestrict Thanks a lot ! Take care! Xiaodong Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Mar 20 19:24:22 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 20 Mar 2020 20:24:22 -0400 Subject: [petsc-users] Inquiry about the interpolation and restriction matrix for PCMG In-Reply-To: References: Message-ID: On Fri, Mar 20, 2020 at 6:26 PM Xiaodong Liu wrote: > Hi, Petsc team, > > I am practising PCMG using the following case (3DQ1) > > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex45.c.html > > > I have several questions. > *1) About the interpolation and restriction matrix* > > For both Galerkin and non-Galerkin type, the interpolation matrix P is > based on the trilinear interpolation (*I found the source code*) and the > restriction one R is 1/8*P^T? > That sounds right if the course grid operator is not Galerkin. If it Galerkin you do not need the 1/8 scaling. > *Could you please tell me where the source code to define the restriction > matrix is? * > *2) About the operator on the coarse level* > > It is straightforward to calculate the operator on the coarse level for > Galerkin type by R*A*P. But how did you define the operator for > non-Galerkin type? Did you use DMRestrict ? > When I have these questions I just search for where restricthook is set. Or run a debugger and step through the code. Maybe someone else knows this. 
> Could you please tell me where is the source code to define > (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); > in > 2933: PetscErrorCode DMRestrict(DM fine,Mat restrct,Vec rscale,Mat > inject,DM coarse) > 2934: { > 2935: PetscErrorCode ierr; > 2936: DMCoarsenHookLink link; > > 2939: for (link=fine->coarsenhook; link; link=link->next) { > 2940: if (link->restricthook) { > 2941: > (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); > 2942: } > 2943: } > 2944: return(0); > 2945: } > on > > https://www.mcs.anl.gov/petsc/petsc-current/src/dm/interface/dm.c.html#DMRestrict > > > Thanks a lot ! > Take care! > Xiaodong > > > > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Mar 20 20:50:37 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 20 Mar 2020 21:50:37 -0400 Subject: [petsc-users] Inquiry about the interpolation and restriction matrix for PCMG In-Reply-To: References: Message-ID: On Fri, Mar 20, 2020 at 6:26 PM Xiaodong Liu wrote: > Hi, Petsc team, > > I am practising PCMG using the following case (3DQ1) > > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex45.c.html > > > I have several questions. > *1) About the interpolation and restriction matrix* > > For both Galerkin and non-Galerkin type, the interpolation matrix P is > based on the trilinear interpolation (*I found the source code*) and the > restriction one R is 1/8*P^T? > *Could you please tell me where the source code to define the restriction > matrix is? * > *2) About the operator on the coarse level* > > It is straightforward to calculate the operator on the coarse level for > Galerkin type by R*A*P. But how did you define the operator for > non-Galerkin type? > non-Galerkin usually means rediscretization, where the user explicitly supplies the coarser operations, using https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetOperators.html Thanks, Matt > Did you use DMRestrict ? Could you please tell me where is the source code > to define > (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); > in > 2933: PetscErrorCode DMRestrict(DM fine,Mat restrct,Vec rscale,Mat > inject,DM coarse) > 2934: { > 2935: PetscErrorCode ierr; > 2936: DMCoarsenHookLink link; > > 2939: for (link=fine->coarsenhook; link; link=link->next) { > 2940: if (link->restricthook) { > 2941: > (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); > 2942: } > 2943: } > 2944: return(0); > 2945: } > on > > https://www.mcs.anl.gov/petsc/petsc-current/src/dm/interface/dm.c.html#DMRestrict > > > Thanks a lot ! > Take care! > Xiaodong > > > > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xliu29 at ncsu.edu Fri Mar 20 22:28:31 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Fri, 20 Mar 2020 20:28:31 -0700 Subject: [petsc-users] Inquiry about the interpolation and restriction matrix for PCMG In-Reply-To: References: Message-ID: Thanks a lot, Matthew and Mark ! Take care. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Fri, Mar 20, 2020 at 6:50 PM Matthew Knepley wrote: > On Fri, Mar 20, 2020 at 6:26 PM Xiaodong Liu wrote: > >> Hi, Petsc team, >> >> I am practising PCMG using the following case (3DQ1) >> >> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex45.c.html >> >> >> I have several questions. >> *1) About the interpolation and restriction matrix* >> >> For both Galerkin and non-Galerkin type, the interpolation matrix P is >> based on the trilinear interpolation (*I found the source code*) and the >> restriction one R is 1/8*P^T? >> *Could you please tell me where the source code to define the >> restriction matrix is? * >> *2) About the operator on the coarse level* >> >> It is straightforward to calculate the operator on the coarse level for >> Galerkin type by R*A*P. But how did you define the operator for >> non-Galerkin type? >> > > non-Galerkin usually means rediscretization, where the user explicitly > supplies the coarser operations, using > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetOperators.html > > Thanks, > > Matt > > >> Did you use DMRestrict ? Could you please tell me where is the source >> code to define >> (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); >> in >> 2933: PetscErrorCode DMRestrict(DM fine,Mat restrct,Vec rscale,Mat >> inject,DM coarse) >> 2934: { >> 2935: PetscErrorCode ierr; >> 2936: DMCoarsenHookLink link; >> >> 2939: for (link=fine->coarsenhook; link; link=link->next) { >> 2940: if (link->restricthook) { >> 2941: >> (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); >> 2942: } >> 2943: } >> 2944: return(0); >> 2945: } >> on >> >> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/interface/dm.c.html#DMRestrict >> >> >> Thanks a lot ! >> Take care! >> Xiaodong >> >> >> >> Xiaodong Liu, PhD >> X: Computational Physics Division >> Los Alamos National Laboratory >> P.O. Box 1663, >> Los Alamos, NM 87544 >> 505-709-0534 >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swarnava89 at gmail.com Sat Mar 21 01:40:28 2020 From: swarnava89 at gmail.com (Swarnava Ghosh) Date: Fri, 20 Mar 2020 23:40:28 -0700 Subject: [petsc-users] vertices on the boundary of a DMPLEX mesh Message-ID: Dear PETSc team, I had the following question related to DMPLEX. I have a dmplex mesh created through DMPlexCreateFromCellList. I want to find out all the vertices that are on the mesh boundary. What is the best way to do this? Thank you, SG -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From berend.vanwachem at ovgu.de Sat Mar 21 05:25:47 2020 From: berend.vanwachem at ovgu.de (Berend van Wachem) Date: Sat, 21 Mar 2020 11:25:47 +0100 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: <1e0c7876-1403-ba33-e609-3afd141a3f62@ovgu.de> Thanks Matt, This indeed works. Thank you very much! Best wishes, Berend. On 2020-03-19 23:39, Matthew Knepley wrote: > Okay this runs for me. > > ? Thanks, > > ? ? Matt > > On Thu, Mar 19, 2020 at 6:07 PM Matthew Knepley > wrote: > > On Fri, Mar 13, 2020 at 9:45 AM Berend van Wachem > > wrote: > > Dear Matt, > > Thanks for your response. My understanding of the DM and DMClone > is the > same - and I have tested this with a DMPLEX DM without problems. > > However, for some reason, I cannot change/set the section of a > P4EST dm. > In the attached example code, I get an error in line 140, where > I try to > create a new section from the cloned P4EST DM. Is it not > possible to > create/set a section on a P4EST DM? Or maybe I am doing > something else > wrong? Do you suggest a workaround? > > > Hi Berend, > > Sorry I am behind. The problem on line 140 is that you call a DMPlex > function (DMPlexCreateSection) > with a DMForest object. That is illegal. You can, however, call that > function using the Plex you get from > a Forest using DMConvert(DMForestClone, DMPLEX, &plexClone). I will > get your code running as soon > as I can, but after you create the Section, attaching it should be fine. > > ? Thanks, > > ? ? ?Matt > > Many thanks, Berend. > > > On 2020-03-13 00:19, Matthew Knepley wrote: > > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem > > > >> wrote: > > > >? ? ?Dear All, > > > >? ? ?I have started to use DMPLEX with P4EST for a > computational fluid > >? ? ?dynamics application.?I am solving a coupled system of 4 > discretised > >? ? ?equations (for 3 velocity components and one pressure) on > a mesh. > >? ? ?However, next to these 4 variables, I also have a few > single field > >? ? ?variables (such as density and viscosity) defined over > the mesh, > >? ? ?which I > >? ? ?don't solve for (they should not be part of the matrix > with unknowns). > >? ? ?Most of these variables are at the cell centers, but in a > few cases, it > >? ? ?want to define them at cell faces. > > > >? ? ?With just DMPLEX, I solve this by: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? ?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMCreateGlobalVector (and Matrix), so I get an Unknown > vector, a RHS > >? ? ?vector, and a matrix for the 4 variables. > > > >? ? ?To get a vector for a single variable at the cell center > or the cell > >? ? ?face, I clone the original DM, I define a new Section on > it, and then > >? ? ?create the vector from that which I need (e.g. for density, > >? ? ?viscosity or > >? ? ?a velocity at the cell face). > > > >? ? ?Then I loop over the mesh, and with MatSetValuesLocal, I > set the > >? ? ?coefficients. After that, I solve the system for multiple > timesteps > >? ? ?(sequential solves) and get the solution vector with the > 4 variables > >? ? ?after each solve. > > > >? ? ?So-far, this works fine with DMPLEX. However, now I want > to use P4EST, > >? ? ?and I have difficulty defining a variable vector other > than the > >? ? ?original 4. > > > >? ? ?I have changed the code structure: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? 
?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMForestSetBaseDM(DM, DMForest) to create a DMForest > >? ? ?DMCreateGlobalVector (and Matrix), so I get a Unknown > vector, a RHS > >? ? ?vector, and a matrix for the 4 variables > > > >? ? ?then I perform multiple time-steps, > >? ? ? ? ?DMForestTemplate(DMForest -> ?DMForestPost) > >? ? ? ? ?Adapt DMForestPost > >? ? ? ? ?DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > >? ? ? ? ?DMForestTransferVec(UnknownVector , RefinedUnknownVector) > >? ? ? ? ?DMForestPost -> DMForest > >? ? ?and then DMConvert(DMForest,DMPLEX,DM) > >? ? ?and I can solve the system as usual. That also seems to work. > > > >? ? ?But my conceptual question: how can I convert the other > variable > >? ? ?vectors > >? ? ?(obtained with a different section on the same DM) such > as density and > >? ? ?viscosity and faceVelocity within this framework? > > > > > > Here is my current thinking about DMs. A DM is a function space > > overlaying a topology. Much to my dismay, we > > do not have a topology object, so it hides inside DM. > DMClone() creates > > a shallow copy of the topology. We use > > this to have any number of data layouts through PetscSection, > laying > > over the same underlying topology. > > > > So for each layout you have, make a separate clone. Then > things like > > TransferVec() will respond to the layout in > > that clone. Certainly it works this way in Plex. I admit to > not having > > tried this for TransferVec(), but let me know if > > you have any problems. > > > > BTW, I usually use a dm for the solution, which I give to the > solver, > > say SNESSetDM(snes, dm), and then clone > > it as dmAux which has the layout for all the auxiliary fields > that are > > not involved in the solve. The Plex examples > > all use this form. > > > >? ? Thanks, > > > >? ? ? ?Matt > > > >? ? ?The DMForest has the same Section as the original DM and > will thus have > >? ? ?the space for exactly 4 variables per cell. I tried > pushing another > >? ? ?section on the DMForest and DMForestPost, but that does > not seem to > >? ? ?work. Please find attached a working example with code to > do this, > >? ? ?but I > >? ? ?get the error: > > > >? ? ?PETSC ERROR: PetscSectionGetChart() line 513 in > > > ?/usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c > Wrong > >? ? ?type of object: Parameter # 1 > > > >? ? ?So, I is there a way to DMForestTransferVec my other > vectors from one > >? ? ?DMForest to DMForestPost. How can I do this? > > > >? ? ?Many thanks for your help! > > > >? ? ?Best wishes, Berend. > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results > to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. 
> -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From jed at jedbrown.org Sat Mar 21 07:34:13 2020 From: jed at jedbrown.org (Jed Brown) Date: Sat, 21 Mar 2020 06:34:13 -0600 Subject: [petsc-users] vertices on the boundary of a DMPLEX mesh In-Reply-To: References: Message-ID: <87d0957upm.fsf@jedbrown.org> Swarnava Ghosh writes: > Dear PETSc team, > > I had the following question related to DMPLEX. > I have a dmplex mesh created through DMPlexCreateFromCellList. I want to > find out all the vertices that are on the mesh boundary. What is the best > way to do this? DMLabelCreate(PETSC_COMM_SELF, "boundary", &label); DMPlexMarkBoundaryFaces(dm, 1, label); DMPlexLabelComplete(dm, label); DMPlexGetDepthStratum(dm, 0, &vstart, &vend); for (p=vstart; p References: Message-ID: The issue is that, I tested non-Galerkin type without setting up coarse operators explicitly using PCMGSetOperators, but it worked pretty well. Thanks, Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Fri, Mar 20, 2020 at 8:28 PM Xiaodong Liu wrote: > Thanks a lot, Matthew and Mark ! > Take care. > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > > > On Fri, Mar 20, 2020 at 6:50 PM Matthew Knepley wrote: > >> On Fri, Mar 20, 2020 at 6:26 PM Xiaodong Liu wrote: >> >>> Hi, Petsc team, >>> >>> I am practising PCMG using the following case (3DQ1) >>> >>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex45.c.html >>> >>> >>> I have several questions. >>> *1) About the interpolation and restriction matrix* >>> >>> For both Galerkin and non-Galerkin type, the interpolation matrix P is >>> based on the trilinear interpolation (*I found the source code*) and >>> the restriction one R is 1/8*P^T? >>> *Could you please tell me where the source code to define the >>> restriction matrix is? * >>> *2) About the operator on the coarse level* >>> >>> It is straightforward to calculate the operator on the coarse level >>> for Galerkin type by R*A*P. But how did you define the operator for >>> non-Galerkin type? >>> >> >> non-Galerkin usually means rediscretization, where the user explicitly >> supplies the coarser operations, using >> >> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetOperators.html >> >> Thanks, >> >> Matt >> >> >>> Did you use DMRestrict ? Could you please tell me where is the source >>> code to define >>> (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); >>> in >>> 2933: PetscErrorCode DMRestrict(DM fine,Mat restrct,Vec rscale,Mat >>> inject,DM coarse) >>> 2934: { >>> 2935: PetscErrorCode ierr; >>> 2936: DMCoarsenHookLink link; >>> >>> 2939: for (link=fine->coarsenhook; link; link=link->next) { >>> 2940: if (link->restricthook) { >>> 2941: >>> (*link->restricthook)(fine,restrct,rscale,inject,coarse,link->ctx); >>> 2942: } >>> 2943: } >>> 2944: return(0); >>> 2945: } >>> on >>> >>> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/interface/dm.c.html#DMRestrict >>> >>> >>> Thanks a lot ! >>> Take care! >>> Xiaodong >>> >>> >>> >>> Xiaodong Liu, PhD >>> X: Computational Physics Division >>> Los Alamos National Laboratory >>> P.O. Box 1663, >>> Los Alamos, NM 87544 >>> 505-709-0534 >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karabelaselias at gmail.com Mon Mar 23 06:45:08 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Mon, 23 Mar 2020 12:45:08 +0100 Subject: [petsc-users] Construct Matrix based on row and column values Message-ID: Dear Users, I want to implement a FCT (flux corrected transport) scheme with PETSc. To this end I have amongst other things create a Matrix whose entries are given by L_ij = -max(0, A_ij, A_ji) for i neq j L_ii = Sum_{j=0,..n, j neq i} L_ij where Mat A is an (non-symmetric) Input Matrix created beforehand. I was wondering how to do this. My first search brought me to https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html but this just goes over the rows of one matrix to set new values and now I would need to run over the rows and columns of the matrix. My Idea was to just create a transpose of A and do the same but then the row-layout will be different and I can't use the same for loop for A and AT and thus also won't be able to calculate the max's above. Any help would be appreciated Best regards Elias From knepley at gmail.com Mon Mar 23 07:02:34 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2020 08:02:34 -0400 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: References: Message-ID: On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas wrote: > Dear Users, > > I want to implement a FCT (flux corrected transport) scheme with PETSc. > To this end I have amongst other things create a Matrix whose entries > are given by > > L_ij = -max(0, A_ij, A_ji) for i neq j > > L_ii = Sum_{j=0,..n, j neq i} L_ij > > where Mat A is an (non-symmetric) Input Matrix created beforehand. > > I was wondering how to do this. My first search brought me to > > https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html > > > but this just goes over the rows of one matrix to set new values and now > I would need to run over the rows and columns of the matrix. My Idea was > to just create a transpose of A and do the same but then the row-layout > will be different and I can't use the same for loop for A and AT and > thus also won't be able to calculate the max's above. > > Any help would be appreciated > I think it would likely be much easier to write your algorithm directly on the mesh, rather than using matrices, since the locality information is explicit with the mesh, but has to be reconstructed with the matrix. The problem here is that in parallel there would be no easy way to get the halo you need using a matrix. You really want the ghosted space for assembly, and that is provided by the DM objects. Does this make sense? Unless anybody in PETSc has a better idea. Thanks, Matt > Best regards > > Elias > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From karabelaselias at gmail.com Mon Mar 23 07:31:26 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Mon, 23 Mar 2020 13:31:26 +0100 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: References: Message-ID: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> Dear Matt, I've just found this answer from 2014 https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html wondering if this would theoretically work. And the thing with this FCT-Schemes is, that they're build on purely algebraic considerations (like AMG) so I don't want to break it back down to mesh information if possible at all. Best regards Elias On 23/03/2020 13:02, Matthew Knepley wrote: > On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas > > wrote: > > Dear Users, > > I want to implement a FCT (flux corrected transport) scheme with > PETSc. > To this end I have amongst other things create a Matrix whose entries > are given by > > L_ij = -max(0, A_ij, A_ji) for i neq j > > L_ii = Sum_{j=0,..n, j neq i} L_ij > > where Mat A is an (non-symmetric) Input Matrix created beforehand. > > I was wondering how to do this. My first search brought me to > https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html > > > > but this just goes over the rows of one matrix to set new values > and now > I would need to run over the rows and columns of the matrix. My > Idea was > to just create a transpose of A and do the same but then the > row-layout > will be different and I can't use the same for loop for A and AT and > thus also won't be able to calculate the max's above. > > Any help would be appreciated > > > I think it would likely be much easier to write your algorithm > directly on the mesh, rather than using matrices, since the locality > information is explicit with the mesh, but has to be reconstructed > with the matrix. > > The problem here is that in parallel there would be no easy way to get > the halo you need using a matrix. You > really want the ghosted space for assembly, and that is provided by > the DM objects. Does this make sense? > Unless anybody in PETSc has a better idea. > > ? Thanks, > > ? ? ?Matt > > Best regards > > Elias > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 23 07:36:34 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2020 08:36:34 -0400 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> References: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> Message-ID: On Mon, Mar 23, 2020 at 8:31 AM Elias Karabelas wrote: > Dear Matt, > > I've just found this answer from 2014 > > https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html > > wondering if this would theoretically work. > > In serial certainly, I just don't see how it works in parallel since you might not own the row you need from the transpose. > And the thing with this FCT-Schemes is, that they're build on purely > algebraic considerations (like AMG) so I don't want to break it back down > to mesh information if possible at all. > The FEM-FCT I am familiar with from Lohner was phrased on a mesh. 
Thanks, Matt > Best regards > > Elias > On 23/03/2020 13:02, Matthew Knepley wrote: > > On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas > wrote: > >> Dear Users, >> >> I want to implement a FCT (flux corrected transport) scheme with PETSc. >> To this end I have amongst other things create a Matrix whose entries >> are given by >> >> L_ij = -max(0, A_ij, A_ji) for i neq j >> >> L_ii = Sum_{j=0,..n, j neq i} L_ij >> >> where Mat A is an (non-symmetric) Input Matrix created beforehand. >> >> I was wondering how to do this. My first search brought me to >> >> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >> >> >> but this just goes over the rows of one matrix to set new values and now >> I would need to run over the rows and columns of the matrix. My Idea was >> to just create a transpose of A and do the same but then the row-layout >> will be different and I can't use the same for loop for A and AT and >> thus also won't be able to calculate the max's above. >> >> Any help would be appreciated >> > > I think it would likely be much easier to write your algorithm directly on > the mesh, rather than using matrices, since the locality information is > explicit with the mesh, but has to be reconstructed with the matrix. > > The problem here is that in parallel there would be no easy way to get the > halo you need using a matrix. You > really want the ghosted space for assembly, and that is provided by the DM > objects. Does this make sense? > Unless anybody in PETSc has a better idea. > > Thanks, > > Matt > > >> Best regards >> >> Elias >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karabelaselias at gmail.com Mon Mar 23 07:37:58 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Mon, 23 Mar 2020 13:37:58 +0100 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: References: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> Message-ID: <740f614d-49da-1688-66fe-6c4debe9af04@gmail.com> On 23/03/2020 13:36, Matthew Knepley wrote: > On Mon, Mar 23, 2020 at 8:31 AM Elias Karabelas > > wrote: > > Dear Matt, > > I've just found this answer from 2014 > > https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html > > wondering if this would theoretically work. > > In serial certainly, I just don't see how it works in parallel since > you might not own the row you need from the transpose. > > And the thing with this FCT-Schemes is, that they're build on > purely algebraic considerations (like AMG) so I don't want to > break it back down to mesh information if possible at all. > > The FEM-FCT I am familiar with from Lohner was phrased on a mesh. Can you give me a reference to that? I based my things on this work https://www.sciencedirect.com/science/article/pii/S0045782508003150#! Best regards Elias > > ? Thanks, > > ? ? 
Matt > > Best regards > > Elias > > On 23/03/2020 13:02, Matthew Knepley wrote: >> On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas >> > wrote: >> >> Dear Users, >> >> I want to implement a FCT (flux corrected transport) scheme >> with PETSc. >> To this end I have amongst other things create a Matrix whose >> entries >> are given by >> >> L_ij = -max(0, A_ij, A_ji) for i neq j >> >> L_ii = Sum_{j=0,..n, j neq i} L_ij >> >> where Mat A is an (non-symmetric) Input Matrix created >> beforehand. >> >> I was wondering how to do this. My first search brought me to >> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >> >> >> >> but this just goes over the rows of one matrix to set new >> values and now >> I would need to run over the rows and columns of the matrix. >> My Idea was >> to just create a transpose of A and do the same but then the >> row-layout >> will be different and I can't use the same for loop for A and >> AT and >> thus also won't be able to calculate the max's above. >> >> Any help would be appreciated >> >> >> I think it would likely be much easier to write your algorithm >> directly on the mesh, rather than using matrices, since the >> locality information is explicit with the mesh, but has to be >> reconstructed with the matrix. >> >> The problem here is that in parallel there would be no easy way >> to get the halo you need using a matrix. You >> really want the ghosted space for assembly, and that is provided >> by the DM objects. Does this make sense? >> Unless anybody in PETSc has a better idea. >> >> ? Thanks, >> >> ? ? ?Matt >> >> Best regards >> >> Elias >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 23 07:39:25 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2020 08:39:25 -0400 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <740f614d-49da-1688-66fe-6c4debe9af04@gmail.com> References: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> <740f614d-49da-1688-66fe-6c4debe9af04@gmail.com> Message-ID: On Mon, Mar 23, 2020 at 8:38 AM Elias Karabelas wrote: > > On 23/03/2020 13:36, Matthew Knepley wrote: > > On Mon, Mar 23, 2020 at 8:31 AM Elias Karabelas > wrote: > >> Dear Matt, >> >> I've just found this answer from 2014 >> >> https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html >> >> wondering if this would theoretically work. >> > In serial certainly, I just don't see how it works in parallel since you > might not own the row you need from the transpose. > >> And the thing with this FCT-Schemes is, that they're build on purely >> algebraic considerations (like AMG) so I don't want to break it back down >> to mesh information if possible at all. >> > The FEM-FCT I am familiar with from Lohner was phrased on a mesh. > > Can you give me a reference to that? I based my things on this work > https://www.sciencedirect.com/science/article/pii/S0045782508003150#! > Volker is of course great. 
I believe I was thinking of https://onlinelibrary.wiley.com/doi/abs/10.1002/fld.1650071007 Thanks, Matt > Best regards > > Elias > > > > Thanks, > > Matt > >> Best regards >> >> Elias >> On 23/03/2020 13:02, Matthew Knepley wrote: >> >> On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas >> wrote: >> >>> Dear Users, >>> >>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>> To this end I have amongst other things create a Matrix whose entries >>> are given by >>> >>> L_ij = -max(0, A_ij, A_ji) for i neq j >>> >>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>> >>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>> >>> I was wondering how to do this. My first search brought me to >>> >>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>> >>> >>> but this just goes over the rows of one matrix to set new values and now >>> I would need to run over the rows and columns of the matrix. My Idea was >>> to just create a transpose of A and do the same but then the row-layout >>> will be different and I can't use the same for loop for A and AT and >>> thus also won't be able to calculate the max's above. >>> >>> Any help would be appreciated >>> >> >> I think it would likely be much easier to write your algorithm directly >> on the mesh, rather than using matrices, since the locality information is >> explicit with the mesh, but has to be reconstructed with the matrix. >> >> The problem here is that in parallel there would be no easy way to get >> the halo you need using a matrix. You >> really want the ghosted space for assembly, and that is provided by the >> DM objects. Does this make sense? >> Unless anybody in PETSc has a better idea. >> >> Thanks, >> >> Matt >> >> >>> Best regards >>> >>> Elias >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karabelaselias at gmail.com Mon Mar 23 07:41:06 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Mon, 23 Mar 2020 13:41:06 +0100 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: References: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> <740f614d-49da-1688-66fe-6c4debe9af04@gmail.com> Message-ID: <2a0ea327-c8e4-eb20-63a9-ecec0ab80ffa@gmail.com> Thanks I'll have a look at it. So I understand correctly, that purely algebraic is not the way to go through PETSc here? Cheers Elias On 23/03/2020 13:39, Matthew Knepley wrote: > On Mon, Mar 23, 2020 at 8:38 AM Elias Karabelas > > wrote: > > > On 23/03/2020 13:36, Matthew Knepley wrote: >> On Mon, Mar 23, 2020 at 8:31 AM Elias Karabelas >> > wrote: >> >> Dear Matt, >> >> I've just found this answer from 2014 >> >> https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html >> >> wondering if this would theoretically work. 
>> >> In serial certainly, I just don't see how it works in parallel >> since you might not own the row you need from the transpose. >> >> And the thing with this FCT-Schemes is, that they're build on >> purely algebraic considerations (like AMG) so I don't want to >> break it back down to mesh information if possible at all. >> >> The FEM-FCT I am familiar with from Lohner was phrased on a mesh. > > Can you give me a reference to that? I based my things on this > work > https://www.sciencedirect.com/science/article/pii/S0045782508003150#! > > Volker is of course great. I believe I was thinking of > https://onlinelibrary.wiley.com/doi/abs/10.1002/fld.1650071007 > > ? Thanks, > > ? ? Matt > > Best regards > > Elias > > >> >> ? Thanks, >> >> ? ? Matt >> >> Best regards >> >> Elias >> >> On 23/03/2020 13:02, Matthew Knepley wrote: >>> On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas >>> > >>> wrote: >>> >>> Dear Users, >>> >>> I want to implement a FCT (flux corrected transport) >>> scheme with PETSc. >>> To this end I have amongst other things create a Matrix >>> whose entries >>> are given by >>> >>> L_ij = -max(0, A_ij, A_ji) for i neq j >>> >>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>> >>> where Mat A is an (non-symmetric) Input Matrix created >>> beforehand. >>> >>> I was wondering how to do this. My first search brought >>> me to >>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>> >>> >>> >>> but this just goes over the rows of one matrix to set >>> new values and now >>> I would need to run over the rows and columns of the >>> matrix. My Idea was >>> to just create a transpose of A and do the same but then >>> the row-layout >>> will be different and I can't use the same for loop for >>> A and AT and >>> thus also won't be able to calculate the max's above. >>> >>> Any help would be appreciated >>> >>> >>> I think it would likely be much easier to write your >>> algorithm directly on the mesh, rather than using matrices, >>> since the locality information is explicit with the mesh, >>> but has to be reconstructed with the matrix. >>> >>> The problem here is that in parallel there would be no easy >>> way to get the halo you need using a matrix. You >>> really want the ghosted space for assembly, and that is >>> provided by the DM objects. Does this make sense? >>> Unless anybody in PETSc has a better idea. >>> >>> ? Thanks, >>> >>> ? ? ?Matt >>> >>> Best regards >>> >>> Elias >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Mar 23 07:56:20 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2020 08:56:20 -0400 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <2a0ea327-c8e4-eb20-63a9-ecec0ab80ffa@gmail.com> References: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> <740f614d-49da-1688-66fe-6c4debe9af04@gmail.com> <2a0ea327-c8e4-eb20-63a9-ecec0ab80ffa@gmail.com> Message-ID: On Mon, Mar 23, 2020 at 8:41 AM Elias Karabelas wrote: > Thanks I'll have a look at it. So I understand correctly, that purely > algebraic is not the way to go through PETSc here? > You can make it work. You would have the same difficulty in any linear algebra package, namely that you need an overlapped decomposition of the matrix, which no package does by default. PETSc does it for ASM, so you could use those routines to get what you want. Thanks, Matt > Cheers > > Elias > On 23/03/2020 13:39, Matthew Knepley wrote: > > On Mon, Mar 23, 2020 at 8:38 AM Elias Karabelas > wrote: > >> >> On 23/03/2020 13:36, Matthew Knepley wrote: >> >> On Mon, Mar 23, 2020 at 8:31 AM Elias Karabelas >> wrote: >> >>> Dear Matt, >>> >>> I've just found this answer from 2014 >>> >>> https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html >>> >>> wondering if this would theoretically work. >>> >> In serial certainly, I just don't see how it works in parallel since you >> might not own the row you need from the transpose. >> >>> And the thing with this FCT-Schemes is, that they're build on purely >>> algebraic considerations (like AMG) so I don't want to break it back down >>> to mesh information if possible at all. >>> >> The FEM-FCT I am familiar with from Lohner was phrased on a mesh. >> >> Can you give me a reference to that? I based my things on this work >> https://www.sciencedirect.com/science/article/pii/S0045782508003150#! >> > Volker is of course great. I believe I was thinking of > https://onlinelibrary.wiley.com/doi/abs/10.1002/fld.1650071007 > > Thanks, > > Matt > >> Best regards >> >> Elias >> >> >> >> Thanks, >> >> Matt >> >>> Best regards >>> >>> Elias >>> On 23/03/2020 13:02, Matthew Knepley wrote: >>> >>> On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas < >>> karabelaselias at gmail.com> wrote: >>> >>>> Dear Users, >>>> >>>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>>> To this end I have amongst other things create a Matrix whose entries >>>> are given by >>>> >>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>> >>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>> >>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>>> >>>> I was wondering how to do this. My first search brought me to >>>> >>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>> >>>> >>>> but this just goes over the rows of one matrix to set new values and >>>> now >>>> I would need to run over the rows and columns of the matrix. My Idea >>>> was >>>> to just create a transpose of A and do the same but then the row-layout >>>> will be different and I can't use the same for loop for A and AT and >>>> thus also won't be able to calculate the max's above. >>>> >>>> Any help would be appreciated >>>> >>> >>> I think it would likely be much easier to write your algorithm directly >>> on the mesh, rather than using matrices, since the locality information is >>> explicit with the mesh, but has to be reconstructed with the matrix. 
>>> >>> The problem here is that in parallel there would be no easy way to get >>> the halo you need using a matrix. You >>> really want the ghosted space for assembly, and that is provided by the >>> DM objects. Does this make sense? >>> Unless anybody in PETSc has a better idea. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Best regards >>>> >>>> Elias >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From berend.vanwachem at ovgu.de Mon Mar 23 08:08:41 2020 From: berend.vanwachem at ovgu.de (Berend van Wachem) Date: Mon, 23 Mar 2020 14:08:41 +0100 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: <9c119245-402f-7efb-3739-6fc02767e95a@ovgu.de> Dear Matt, It seems I judged to early. The solution works when the mesh is refined, and for multiple times refinement, but does not work for re-coarsening. I have created a working example where I show this, please find it attached. Starting from line 190, the forest is refined. A call to the routine "AdaptVector" then does a similar refinement for another vector created on a cloned DM with a different section . Starting from line 208, the forest is coarsened. The code crashes in transferring the vector from the finer dmForest to the coarser one. If I write "DM_ADAPT_REFINE" instead of "DM_ADAPT_COARSEN" on line 211, everything works fine. So there seems to be an issue when coarsening. The error I get is: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. 
[0]PETSC ERROR: [0] DMPlexTransferVecTree_Inject line 4330 /usr/local/petsc-3.12.4/src/dm/impls/plex/plextree.c [0]PETSC ERROR: [0] DMPlexTransferVecTree line 4505 /usr/local/petsc-3.12.4/src/dm/impls/plex/plextree.c [0]PETSC ERROR: [0] DMForestTransferVec_p8est line 4829 /usr/local/petsc-3.12.4/src/dm/impls/forest/p4est/pforest.c [0]PETSC ERROR: [0] DMForestTransferVec line 997 /usr/local/petsc-3.12.4/src/dm/impls/forest/forest.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Signal received [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.12.4, Feb, 04, 2020 [0]PETSC ERROR: ./workingexample on a linux-gcc-dev named multiflow.multiflow.org by berend Mon Mar 23 14:05:17 2020 [0]PETSC ERROR: Configure options --download-metis=yes --download-parmetis=yes --download-hdf5=yes --download-p4est=yes --with-zlib-include=/usr/include --with-zlib-lib=/usr/lib64/libz.a --download-triangle=yes --download-ctetgen=yes [0]PETSC ERROR: #1 User provided function() line 0 in unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 Do you have any ideas? Thanks, I really appreciate it. Best wishes, Berend. On 2020-03-19 23:39, Matthew Knepley wrote: > Okay this runs for me. > > ? Thanks, > > ? ? Matt > > On Thu, Mar 19, 2020 at 6:07 PM Matthew Knepley > wrote: > > On Fri, Mar 13, 2020 at 9:45 AM Berend van Wachem > > wrote: > > Dear Matt, > > Thanks for your response. My understanding of the DM and DMClone > is the > same - and I have tested this with a DMPLEX DM without problems. > > However, for some reason, I cannot change/set the section of a > P4EST dm. > In the attached example code, I get an error in line 140, where > I try to > create a new section from the cloned P4EST DM. Is it not > possible to > create/set a section on a P4EST DM? Or maybe I am doing > something else > wrong? Do you suggest a workaround? > > > Hi Berend, > > Sorry I am behind. The problem on line 140 is that you call a DMPlex > function (DMPlexCreateSection) > with a DMForest object. That is illegal. You can, however, call that > function using the Plex you get from > a Forest using DMConvert(DMForestClone, DMPLEX, &plexClone). I will > get your code running as soon > as I can, but after you create the Section, attaching it should be fine. > > ? Thanks, > > ? ? ?Matt > > Many thanks, Berend. > > > On 2020-03-13 00:19, Matthew Knepley wrote: > > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem > > > >> wrote: > > > >? ? ?Dear All, > > > >? ? ?I have started to use DMPLEX with P4EST for a > computational fluid > >? ? ?dynamics application.?I am solving a coupled system of 4 > discretised > >? ? ?equations (for 3 velocity components and one pressure) on > a mesh. > >? ? ?However, next to these 4 variables, I also have a few > single field > >? ? ?variables (such as density and viscosity) defined over > the mesh, > >? ? ?which I > >? ? ?don't solve for (they should not be part of the matrix > with unknowns). > >? ? ?Most of these variables are at the cell centers, but in a > few cases, it > >? ? ?want to define them at cell faces. > > > >? ? ?With just DMPLEX, I solve this by: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? ?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMCreateGlobalVector (and Matrix), so I get an Unknown > vector, a RHS > >? ? ?vector, and a matrix for the 4 variables. 
> > > >? ? ?To get a vector for a single variable at the cell center > or the cell > >? ? ?face, I clone the original DM, I define a new Section on > it, and then > >? ? ?create the vector from that which I need (e.g. for density, > >? ? ?viscosity or > >? ? ?a velocity at the cell face). > > > >? ? ?Then I loop over the mesh, and with MatSetValuesLocal, I > set the > >? ? ?coefficients. After that, I solve the system for multiple > timesteps > >? ? ?(sequential solves) and get the solution vector with the > 4 variables > >? ? ?after each solve. > > > >? ? ?So-far, this works fine with DMPLEX. However, now I want > to use P4EST, > >? ? ?and I have difficulty defining a variable vector other > than the > >? ? ?original 4. > > > >? ? ?I have changed the code structure: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? ?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMForestSetBaseDM(DM, DMForest) to create a DMForest > >? ? ?DMCreateGlobalVector (and Matrix), so I get a Unknown > vector, a RHS > >? ? ?vector, and a matrix for the 4 variables > > > >? ? ?then I perform multiple time-steps, > >? ? ? ? ?DMForestTemplate(DMForest -> ?DMForestPost) > >? ? ? ? ?Adapt DMForestPost > >? ? ? ? ?DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > >? ? ? ? ?DMForestTransferVec(UnknownVector , RefinedUnknownVector) > >? ? ? ? ?DMForestPost -> DMForest > >? ? ?and then DMConvert(DMForest,DMPLEX,DM) > >? ? ?and I can solve the system as usual. That also seems to work. > > > >? ? ?But my conceptual question: how can I convert the other > variable > >? ? ?vectors > >? ? ?(obtained with a different section on the same DM) such > as density and > >? ? ?viscosity and faceVelocity within this framework? > > > > > > Here is my current thinking about DMs. A DM is a function space > > overlaying a topology. Much to my dismay, we > > do not have a topology object, so it hides inside DM. > DMClone() creates > > a shallow copy of the topology. We use > > this to have any number of data layouts through PetscSection, > laying > > over the same underlying topology. > > > > So for each layout you have, make a separate clone. Then > things like > > TransferVec() will respond to the layout in > > that clone. Certainly it works this way in Plex. I admit to > not having > > tried this for TransferVec(), but let me know if > > you have any problems. > > > > BTW, I usually use a dm for the solution, which I give to the > solver, > > say SNESSetDM(snes, dm), and then clone > > it as dmAux which has the layout for all the auxiliary fields > that are > > not involved in the solve. The Plex examples > > all use this form. > > > >? ? Thanks, > > > >? ? ? ?Matt > > > >? ? ?The DMForest has the same Section as the original DM and > will thus have > >? ? ?the space for exactly 4 variables per cell. I tried > pushing another > >? ? ?section on the DMForest and DMForestPost, but that does > not seem to > >? ? ?work. Please find attached a working example with code to > do this, > >? ? ?but I > >? ? ?get the error: > > > >? ? ?PETSC ERROR: PetscSectionGetChart() line 513 in > > > ?/usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c > Wrong > >? ? ?type of object: Parameter # 1 > > > >? ? ?So, I is there a way to DMForestTransferVec my other > vectors from one > >? ? ?DMForest to DMForestPost. How can I do this? > > > >? ? ?Many thanks for your help! > > > >? ? ?Best wishes, Berend. 
> > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results > to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- A non-text attachment was scrubbed... Name: dmplexp4est-2.c Type: text/x-csrc Size: 9128 bytes Desc: not available URL: From knepley at gmail.com Mon Mar 23 08:24:54 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2020 09:24:54 -0400 Subject: [petsc-users] node DG with DMPlex In-Reply-To: <0526eb34-b4ce-19c4-4f76-81d2cd41cd45@univ-amu.fr> References: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> <0526eb34-b4ce-19c4-4f76-81d2cd41cd45@univ-amu.fr> Message-ID: On Wed, Mar 18, 2020 at 12:58 PM Yann Jobic wrote: > Hi matt, > > Le 3/17/2020 ? 4:00 PM, Matthew Knepley a ?crit : > > On Mon, Mar 16, 2020 at 5:20 PM Yann Jobic > > wrote: > > > > Hi all, > > > > I would like to implement a nodal DG with the DMPlex interface. > > Therefore, i must add the internal nodes to the DM (GLL nodes), with > > the > > constrains : > > 1) Add them as solution points, with correct coordinates (and keep > the > > good rotational ordering) > > 2) Find the shared nodes at faces in order to compute the fluxes > > 3) For parallel use, so synchronize the ghost node at each time steps > > > > > > Let me get the fundamentals straight before advising, since I have never > > implemented nodal DG. > > > > 1) What is shared? > I need to duplicate an edge in 2D, or a facet in 3D, and to sync it > after a time step, in order to compute the numerical fluxes > (Lax-Friedrichs at the beginning). > I should have been more specific, but I think I see what you want. You do not "share" unknowns between cells, so all unknowns should be associated with some cell in the Section. You think of some cell unknowns as being "connected" to a face, so when you want to calculate a flux, you need the unknowns from the adjacent cell in order to do it. In order to do this, I would partition with overlap=1, which is what we do for finite volume, which has the same adjacency needs. You might also set https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetAdjacency.html to PETSC_TRUE, PETSC_FALSE, but you are probably doing everything matrix-free if you are using DG. The above is optimal for FV, but not for DG because you communicate more than you absolutely have to. A more complicated, but optimal, thing to do would be to assign interior dofs to the cell, and two sets of dofs to each face, one for each cell. Then you only communicate the face dofs. Its just more bookkeeping for you, but it will work in parallel just fine. I don't think you need extra vertices, or coordinates, and for output I recommend using DMPlexProject() to get the solution in some space that can be plotted like P1, or anything else supported by your visualization. 
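A rough sketch of that cell-plus-face layout with a hand-built PetscSection (the dof counts are placeholders for whatever your GLL basis needs, boundary faces would only carry one set, and this is untested):

  PetscSection s;
  PetscInt     pStart, pEnd, cStart, cEnd, fStart, fEnd, p;
  PetscInt     nCellDof = 6;   /* placeholder: interior nodes per cell        */
  PetscInt     nFaceDof = 3;   /* placeholder: trace nodes per face, per side */

  PetscSectionCreate(PetscObjectComm((PetscObject) dm), &s);
  DMPlexGetChart(dm, &pStart, &pEnd);
  PetscSectionSetChart(s, pStart, pEnd);
  DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);  /* cells */
  DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd);  /* faces */
  for (p = cStart; p < cEnd; ++p) PetscSectionSetDof(s, p, nCellDof);
  for (p = fStart; p < fEnd; ++p) PetscSectionSetDof(s, p, 2*nFaceDof); /* one copy per adjacent cell */
  PetscSectionSetUp(s);
  DMSetLocalSection(dm, s);
  PetscSectionDestroy(&s);

With the section in place, DMGlobalToLocal() on a vector from DMCreateLocalVector() should give you the neighboring cell's face trace without carrying a full cell overlap.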
Thanks, Matt > > > We have an implementation of spectral element ordering > > ( > https://gitlab.com/petsc/petsc/-/blob/master/src/dm/impls/plex/examples/tutorials/ex6.c). > > > Those share > > the whole element boundary. > > > > 2) What ghosts do you need? > In order to compute the numerical fluxes of one element, i need the > values of the surrounding nodes connected to the adjacent elements. > > > > 3) You want to store real space coordinates for a quadrature? > It should be basically the same as PetscFE of higher order. > I add some vertex needed to compute a polynomal solution of the desired > order. That means that if i have a N, order of the local approximation, > i need 0.5*(N+1)*(N+2) vertex to store in the DMPlex (in 2D), in order to : > 1) have the correct number of dof > 2) use ghost nodes to sync the values of the vertex/edge/facet for > 1D/2D/3D problem > 2) save correctly the solution > > Does it make sense to you ? > > Maybe like > > https://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ex11.c.html > With the use of the function SplitFaces, which i didn't fully understood > so far. > > Thanks, > > Yann > > > > > We usually define a quadrature on the reference element once. > > > > Thanks, > > > > Matt > > > > I found elements of answers in those threads : > > > https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html > > > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html > > > > However, it's not clear for me where to begin. > > > > Quoting Matt, i should : > > " DMGetCoordinateDM(dm, &cdm); > > > > DMCreateLocalVector(cdm, &coordinatesLocal); > > > > DMSetCoordinatesLocal(dm, coordinatesLocal);" > > > > However, i will not create ghost nodes this way. And i'm not sure to > > keep the good ordering. > > This part should be implemented in the PetscFE interface, for high > > order > > discrete solutions. > > I did not succeed in finding the correct part of the source doing it. > > > > Could you please give me some hint to begin correctly thoses tasks ? > > > > Thanks, > > > > Yann > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ < > http://www.cse.buffalo.edu/~knepley/> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karabelaselias at gmail.com Mon Mar 23 09:17:28 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Mon, 23 Mar 2020 15:17:28 +0100 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: References: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> <740f614d-49da-1688-66fe-6c4debe9af04@gmail.com> <2a0ea327-c8e4-eb20-63a9-ecec0ab80ffa@gmail.com> Message-ID: <09b9fae4-3a95-5ffa-4085-16c3bde7247b@gmail.com> Ok I'll try to decipher that. Thought that I would maybe find something in the GAMG routines but I'll be happy to scroll through the ASM stuff :D On 23/03/2020 13:56, Matthew Knepley wrote: > On Mon, Mar 23, 2020 at 8:41 AM Elias Karabelas > > wrote: > > Thanks I'll have a look at it. So I understand correctly, that > purely algebraic is not the way to go through PETSc here? > > You can make it work. 
You would have the same difficulty in any linear > algebra package, namely that you need > an overlapped decomposition of the matrix, which no package does by > default. PETSc does it for ASM, so you could > use those routines to get what you want. > > ? Thanks, > > ? ? ?Matt > > Cheers > > Elias > > On 23/03/2020 13:39, Matthew Knepley wrote: >> On Mon, Mar 23, 2020 at 8:38 AM Elias Karabelas >> > wrote: >> >> >> On 23/03/2020 13:36, Matthew Knepley wrote: >>> On Mon, Mar 23, 2020 at 8:31 AM Elias Karabelas >>> > >>> wrote: >>> >>> Dear Matt, >>> >>> I've just found this answer from 2014 >>> >>> https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html >>> >>> wondering if this would theoretically work. >>> >>> In serial certainly, I just don't see how it works in >>> parallel since you might not own the row you need from the >>> transpose. >>> >>> And the thing with this FCT-Schemes is, that they're >>> build on purely algebraic considerations (like AMG) so I >>> don't want to break it back down to mesh information if >>> possible at all. >>> >>> The FEM-FCT I am familiar with from Lohner was phrased on a >>> mesh. >> >> Can you give me a reference to that? I based my things on >> this work >> https://www.sciencedirect.com/science/article/pii/S0045782508003150#! >> >> Volker is of course great. I believe I was thinking of >> https://onlinelibrary.wiley.com/doi/abs/10.1002/fld.1650071007 >> >> ? Thanks, >> >> ? ? Matt >> >> Best regards >> >> Elias >> >> >>> >>> ? Thanks, >>> >>> ? ? Matt >>> >>> Best regards >>> >>> Elias >>> >>> On 23/03/2020 13:02, Matthew Knepley wrote: >>>> On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas >>>> >>> > wrote: >>>> >>>> Dear Users, >>>> >>>> I want to implement a FCT (flux corrected >>>> transport) scheme with PETSc. >>>> To this end I have amongst other things create a >>>> Matrix whose entries >>>> are given by >>>> >>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>> >>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>> >>>> where Mat A is an (non-symmetric) Input Matrix >>>> created beforehand. >>>> >>>> I was wondering how to do this. My first search >>>> brought me to >>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>> >>>> >>>> >>>> but this just goes over the rows of one matrix to >>>> set new values and now >>>> I would need to run over the rows and columns of >>>> the matrix. My Idea was >>>> to just create a transpose of A and do the same but >>>> then the row-layout >>>> will be different and I can't use the same for loop >>>> for A and AT and >>>> thus also won't be able to calculate the max's above. >>>> >>>> Any help would be appreciated >>>> >>>> >>>> I think it would likely be much easier to write your >>>> algorithm directly on the mesh, rather than using >>>> matrices, since the locality information is explicit >>>> with the mesh, but has to be reconstructed with the matrix. >>>> >>>> The problem here is that in parallel there would be no >>>> easy way to get the halo you need using a matrix. You >>>> really want the ghosted space for assembly, and that is >>>> provided by the DM objects. Does this make sense? >>>> Unless anybody in PETSc has a better idea. >>>> >>>> ? Thanks, >>>> >>>> ? ? ?Matt >>>> >>>> Best regards >>>> >>>> Elias >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they >>>> begin their experiments is infinitely more interesting >>>> than any results to which their experiments lead. 
>>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Mar 23 09:42:49 2020 From: jed at jedbrown.org (Jed Brown) Date: Mon, 23 Mar 2020 08:42:49 -0600 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: References: Message-ID: <87d0932kuu.fsf@jedbrown.org> Elias Karabelas writes: > Dear Users, > > I want to implement a FCT (flux corrected transport) scheme with PETSc. > To this end I have amongst other things create a Matrix whose entries > are given by > > L_ij = -max(0, A_ij, A_ji) for i neq j > > L_ii = Sum_{j=0,..n, j neq i} L_ij > > where Mat A is an (non-symmetric) Input Matrix created beforehand. > > I was wondering how to do this. My first search brought me to > https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html > > > but this just goes over the rows of one matrix to set new values and now > I would need to run over the rows and columns of the matrix. My Idea was > to just create a transpose of A and do the same but then the row-layout > will be different and I can't use the same for loop for A and AT and > thus also won't be able to calculate the max's above. Does your matrix have symmetric nonzero structure? (It's typical for finite element methods.) If so, all the indices will match up so I think you can do something like: for (row=rowstart; row References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> Message-ID: <87a7472kcb.fsf@jedbrown.org> Thanks; please don't drop the list. I'd be curious whether this operation is common enough that we should add it to PETSc. My hesitance has been that people may want many different variants when working with systems of equations, for example. Elias Karabelas writes: > Dear Jed, > > Yes the Matrix A comes from assembling a FEM-convection-diffusion > operator over a tetrahedral mesh. So my matrix graph should be > symmetric. Thanks for the snippet > > On 23/03/2020 15:42, Jed Brown wrote: >> Elias Karabelas writes: >> >>> Dear Users, >>> >>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>> To this end I have amongst other things create a Matrix whose entries >>> are given by >>> >>> L_ij = -max(0, A_ij, A_ji) for i neq j >>> >>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>> >>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>> >>> I was wondering how to do this. My first search brought me to >>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>> >>> >>> but this just goes over the rows of one matrix to set new values and now >>> I would need to run over the rows and columns of the matrix. 
My Idea was >>> to just create a transpose of A and do the same but then the row-layout >>> will be different and I can't use the same for loop for A and AT and >>> thus also won't be able to calculate the max's above. >> Does your matrix have symmetric nonzero structure? (It's typical for >> finite element methods.) >> >> If so, all the indices will match up so I think you can do something like: >> >> for (row=rowstart; row> PetscScalar Lvals[MAX_LEN]; >> PetscInt diag; >> MatGetRow(A, row, &ncols, &cols, &vals); >> MatGetRow(At, row, &ncolst, &colst, &valst); >> assert(ncols == ncolst); // symmetric structure >> PetscScalar sum = 0; >> for (c=0; c> assert(cols[c] == colst[c]); // symmetric structure >> if (cols[c] == row) diag = c; >> else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); >> } >> Lvals[diag] = sum; >> MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); >> MatRestoreRow(A, row, &ncols, &cols, &vals); >> MatRestoreRow(At, row, &ncolst, &colst, &valst); >> } From karabelaselias at gmail.com Mon Mar 23 09:56:50 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Mon, 23 Mar 2020 15:56:50 +0100 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <87a7472kcb.fsf@jedbrown.org> References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> <87a7472kcb.fsf@jedbrown.org> Message-ID: Yeah I'll let you know how it turns out, for the moment I'm aiming at scaler convection-diffusion equations for simulating the evolution of a tracer concentration in a fluid with a precalculated velocity field from a NS-Simulation. On 23/03/2020 15:53, Jed Brown wrote: > Thanks; please don't drop the list. > > I'd be curious whether this operation is common enough that we should > add it to PETSc. My hesitance has been that people may want many > different variants when working with systems of equations, for example. > > Elias Karabelas writes: > >> Dear Jed, >> >> Yes the Matrix A comes from assembling a FEM-convection-diffusion >> operator over a tetrahedral mesh. So my matrix graph should be >> symmetric. Thanks for the snippet >> >> On 23/03/2020 15:42, Jed Brown wrote: >>> Elias Karabelas writes: >>> >>>> Dear Users, >>>> >>>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>>> To this end I have amongst other things create a Matrix whose entries >>>> are given by >>>> >>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>> >>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>> >>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>>> >>>> I was wondering how to do this. My first search brought me to >>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>> >>>> >>>> but this just goes over the rows of one matrix to set new values and now >>>> I would need to run over the rows and columns of the matrix. My Idea was >>>> to just create a transpose of A and do the same but then the row-layout >>>> will be different and I can't use the same for loop for A and AT and >>>> thus also won't be able to calculate the max's above. >>> Does your matrix have symmetric nonzero structure? (It's typical for >>> finite element methods.) 
>>> >>> If so, all the indices will match up so I think you can do something like: >>> >>> for (row=rowstart; row>> PetscScalar Lvals[MAX_LEN]; >>> PetscInt diag; >>> MatGetRow(A, row, &ncols, &cols, &vals); >>> MatGetRow(At, row, &ncolst, &colst, &valst); >>> assert(ncols == ncolst); // symmetric structure >>> PetscScalar sum = 0; >>> for (c=0; c>> assert(cols[c] == colst[c]); // symmetric structure >>> if (cols[c] == row) diag = c; >>> else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); >>> } >>> Lvals[diag] = sum; >>> MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); >>> MatRestoreRow(A, row, &ncols, &cols, &vals); >>> MatRestoreRow(At, row, &ncolst, &colst, &valst); >>> } From mfadams at lbl.gov Mon Mar 23 13:18:05 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 23 Mar 2020 14:18:05 -0400 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <09b9fae4-3a95-5ffa-4085-16c3bde7247b@gmail.com> References: <3d39cacb-4803-78d5-4b61-b7cefa648457@gmail.com> <740f614d-49da-1688-66fe-6c4debe9af04@gmail.com> <2a0ea327-c8e4-eb20-63a9-ecec0ab80ffa@gmail.com> <09b9fae4-3a95-5ffa-4085-16c3bde7247b@gmail.com> Message-ID: On Mon, Mar 23, 2020 at 10:18 AM Elias Karabelas wrote: > Ok I'll try to decipher that. Thought that I would maybe find something in > the GAMG routines > Not much, GAMG symmetrizes the graph (G) for coarse grid constriction, where G_ij must equal G_ji in parallel. G is a graph derived from A or (A+A^T). but I'll be happy to scroll through the ASM stuff :D > On 23/03/2020 13:56, Matthew Knepley wrote: > > On Mon, Mar 23, 2020 at 8:41 AM Elias Karabelas > wrote: > >> Thanks I'll have a look at it. So I understand correctly, that purely >> algebraic is not the way to go through PETSc here? >> > You can make it work. You would have the same difficulty in any linear > algebra package, namely that you need > an overlapped decomposition of the matrix, which no package does by > default. PETSc does it for ASM, so you could > use those routines to get what you want. > > Thanks, > > Matt > >> Cheers >> >> Elias >> On 23/03/2020 13:39, Matthew Knepley wrote: >> >> On Mon, Mar 23, 2020 at 8:38 AM Elias Karabelas >> wrote: >> >>> >>> On 23/03/2020 13:36, Matthew Knepley wrote: >>> >>> On Mon, Mar 23, 2020 at 8:31 AM Elias Karabelas < >>> karabelaselias at gmail.com> wrote: >>> >>>> Dear Matt, >>>> >>>> I've just found this answer from 2014 >>>> >>>> https://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022450.html >>>> >>>> wondering if this would theoretically work. >>>> >>> In serial certainly, I just don't see how it works in parallel since you >>> might not own the row you need from the transpose. >>> >>>> And the thing with this FCT-Schemes is, that they're build on purely >>>> algebraic considerations (like AMG) so I don't want to break it back down >>>> to mesh information if possible at all. >>>> >>> The FEM-FCT I am familiar with from Lohner was phrased on a mesh. >>> >>> Can you give me a reference to that? I based my things on this work >>> https://www.sciencedirect.com/science/article/pii/S0045782508003150#! >>> >> Volker is of course great. 
I believe I was thinking of >> https://onlinelibrary.wiley.com/doi/abs/10.1002/fld.1650071007 >> >> Thanks, >> >> Matt >> >>> Best regards >>> >>> Elias >>> >>> >>> >>> Thanks, >>> >>> Matt >>> >>>> Best regards >>>> >>>> Elias >>>> On 23/03/2020 13:02, Matthew Knepley wrote: >>>> >>>> On Mon, Mar 23, 2020 at 7:46 AM Elias Karabelas < >>>> karabelaselias at gmail.com> wrote: >>>> >>>>> Dear Users, >>>>> >>>>> I want to implement a FCT (flux corrected transport) scheme with >>>>> PETSc. >>>>> To this end I have amongst other things create a Matrix whose entries >>>>> are given by >>>>> >>>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>>> >>>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>>> >>>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>>>> >>>>> I was wondering how to do this. My first search brought me to >>>>> >>>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>>> >>>>> >>>>> but this just goes over the rows of one matrix to set new values and >>>>> now >>>>> I would need to run over the rows and columns of the matrix. My Idea >>>>> was >>>>> to just create a transpose of A and do the same but then the >>>>> row-layout >>>>> will be different and I can't use the same for loop for A and AT and >>>>> thus also won't be able to calculate the max's above. >>>>> >>>>> Any help would be appreciated >>>>> >>>> >>>> I think it would likely be much easier to write your algorithm directly >>>> on the mesh, rather than using matrices, since the locality information is >>>> explicit with the mesh, but has to be reconstructed with the matrix. >>>> >>>> The problem here is that in parallel there would be no easy way to get >>>> the halo you need using a matrix. You >>>> really want the ghosted space for assembly, and that is provided by the >>>> DM objects. Does this make sense? >>>> Unless anybody in PETSc has a better idea. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Best regards >>>>> >>>>> Elias >>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xliu29 at ncsu.edu Mon Mar 23 19:30:37 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Mon, 23 Mar 2020 17:30:37 -0700 Subject: [petsc-users] About the interpolation and restriction matrix for cell-centered multigrid. Message-ID: Hi, all, I want to confirm one thing about the interpolation and restrict matrix for the cell-centered multigrid. I am running ex32.c. The following is my understanding. 
For cell-centered multigrid, only DMDA_Q0 can be set as the interpolation type. For interpolation from the coarse mesh, the values for the 4 finer cells are set equal to that of coarse cell. Then the restriction matrix is the inverse of the interpolation one for Galerkin type. If I want to use the bilinear interpolation, I need to code the subrotuine myself, right? Please double check whether my understanding is right. Thanks A LOT. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 -------------- next part -------------- An HTML attachment was scrubbed... URL: From perceval.desforges at polytechnique.edu Tue Mar 24 05:07:47 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Tue, 24 Mar 2020 11:07:47 +0100 Subject: [petsc-users] Calculating inertias Message-ID: Dear petsc developers, I am interested in calculating the inertias of matrixes. Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? Best regards, Perceval, P.S. my last email seems to not have been sent (I couldn't find it in the archives) so I am trying again... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Tue Mar 24 05:15:17 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 24 Mar 2020 11:15:17 +0100 Subject: [petsc-users] Calculating inertias In-Reply-To: References: Message-ID: You can do this directly in PETSc. Create a KSP object with PREONLY and PCCHOLESKY (or just a PC object). Then call KSPSetUp (or PCSetUp) and extract the factored matrix with PCFactorGetMatrix(). Then call MatGetInertia() on the factored matrix. Repeat this for each value of E. I guess it can be even shorter if you call MatCholeskyFactor() directly. Jose > El 24 mar 2020, a las 11:07, Perceval Desforges escribi?: > > Dear petsc developers, > > I am interested in calculating the inertias of matrixes. Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. > > In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? > > Best regards, > > Perceval, > > P.S. my last email seems to not have been sent (I couldn't find it in the archives) so I am trying again... > From mfadams at lbl.gov Tue Mar 24 08:43:21 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 24 Mar 2020 09:43:21 -0400 Subject: [petsc-users] About the interpolation and restriction matrix for cell-centered multigrid. In-Reply-To: References: Message-ID: Good question. It does look like there is Q1: src/dm/impls/da/da.c:- ctype - DMDA_Q1 and DMDA_Q0 are currently the only supported forms And in looking at a cell centered example src/snes/examples/tutorials/ex20.c, it looks like only DMDA_Q1 works. I get an error when I set it to DMDA_Q0 (DMDA_Q1 is the default). This is puzzling, Q0 is natural in cell centered. 
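For reference, the type is requested on the DMDA before DMCreateInterpolation() builds the operator; a minimal, untested sketch (dac/daf are assumed to be compatible coarse/fine DMDAs):

  Mat P;
  DMDASetInterpolationType(dac, DMDA_Q0);    /* piecewise-constant, cell-centered */
  DMDASetInterpolationType(daf, DMDA_Q0);    /* set on both levels to be safe */
  DMCreateInterpolation(dac, daf, &P, NULL); /* P: coarse -> fine; PCMG restriction defaults to P^T */
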
I am not familiar with DMDA and I don't understand why, from ex20, that you have an odd number of points on a cell centered grid and an even number for vertex centered (eg, ex14). I would think that it should be the opposite. I'm puzzled, Mark On Mon, Mar 23, 2020 at 8:32 PM Xiaodong Liu wrote: > Hi, all, > > I want to confirm one thing about the interpolation and restrict matrix > for the cell-centered multigrid. > > I am running ex32.c. The following is my understanding. > > For cell-centered multigrid, only DMDA_Q0 can be set as the interpolation > type. > For interpolation from the coarse mesh, the values for the 4 finer cells > are set equal to that of coarse cell. Then the restriction matrix is the > inverse of the interpolation one for Galerkin type. > > If I want to use the bilinear interpolation, I need to code the subrotuine > myself, right? > > Please double check whether my understanding is right. > > Thanks A LOT. > Xiaodong Liu, PhD > X: Computational Physics Division > Los Alamos National Laboratory > P.O. Box 1663, > Los Alamos, NM 87544 > 505-709-0534 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perceval.desforges at polytechnique.edu Mon Mar 23 12:23:35 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Mon, 23 Mar 2020 18:23:35 +0100 Subject: [petsc-users] *****SPAM*****Calculating inertias Message-ID: <331f4a9427e13063bca7a553976c227f@polytechnique.edu> Dear petsc developers, I am interested in calculating the inertias of matrixes. Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? Best regards, Perceval, -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Mar 24 10:22:33 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 24 Mar 2020 09:22:33 -0600 Subject: [petsc-users] About the interpolation and restriction matrix for cell-centered multigrid. In-Reply-To: References: Message-ID: <87ftdx22x2.fsf@jedbrown.org> Mark Adams writes: > Good question. It does look like there is Q1: > > src/dm/impls/da/da.c:- ctype - DMDA_Q1 and DMDA_Q0 are currently the only > supported forms > > And in looking at a cell centered > example src/snes/examples/tutorials/ex20.c, it looks like only DMDA_Q1 > works. I get an error when I set it to DMDA_Q0 (DMDA_Q1 is the default). > This is puzzling, Q0 is natural in cell centered. The comments in those examples are kinda wrong -- they never told the DM it was cell-centered so it uses a multigrid that isn't compatible with the boundary conditions. The interpolation is Q1 on the dual grid, not conservative Q1 on cells. > I am not familiar with DMDA and I don't understand why, from ex20, that you > have an odd number of points on a cell centered grid and an even number for > vertex centered (eg, ex14). I would think that it should be the opposite. The example is bad. From xliu29 at ncsu.edu Tue Mar 24 10:40:51 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Tue, 24 Mar 2020 08:40:51 -0700 Subject: [petsc-users] About the interpolation and restriction matrix for cell-centered multigrid. 
In-Reply-To: <87ftdx22x2.fsf@jedbrown.org> References: <87ftdx22x2.fsf@jedbrown.org> Message-ID: Thanks, Mark and Jed. It is very helpful. So, for present, Petsc doesn't support Q1 interperpolation for cell-centered multigrid, right? Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Tue, Mar 24, 2020 at 8:22 AM Jed Brown wrote: > Mark Adams writes: > > > Good question. It does look like there is Q1: > > > > src/dm/impls/da/da.c:- ctype - DMDA_Q1 and DMDA_Q0 are currently the > only > > supported forms > > > > And in looking at a cell centered > > example src/snes/examples/tutorials/ex20.c, it looks like only DMDA_Q1 > > works. I get an error when I set it to DMDA_Q0 (DMDA_Q1 is the default). > > This is puzzling, Q0 is natural in cell centered. > > The comments in those examples are kinda wrong -- they never told the DM > it was cell-centered so it uses a multigrid that isn't compatible with > the boundary conditions. The interpolation is Q1 on the dual grid, not > conservative Q1 on cells. > > > I am not familiar with DMDA and I don't understand why, from ex20, that > you > > have an odd number of points on a cell centered grid and an even number > for > > vertex centered (eg, ex14). I would think that it should be the opposite. > > The example is bad. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Mar 24 10:42:30 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 24 Mar 2020 09:42:30 -0600 Subject: [petsc-users] About the interpolation and restriction matrix for cell-centered multigrid. In-Reply-To: References: <87ftdx22x2.fsf@jedbrown.org> Message-ID: <87d09121zt.fsf@jedbrown.org> Xiaodong Liu writes: > Thanks, Mark and Jed. It is very helpful. > So, for present, Petsc doesn't support Q1 interperpolation for > cell-centered multigrid, right? DMDA does not, though you can create your own interpolation matrix, and patches are certainly welcome. From xliu29 at ncsu.edu Tue Mar 24 10:43:47 2020 From: xliu29 at ncsu.edu (Xiaodong Liu) Date: Tue, 24 Mar 2020 08:43:47 -0700 Subject: [petsc-users] About the interpolation and restriction matrix for cell-centered multigrid. In-Reply-To: <87d09121zt.fsf@jedbrown.org> References: <87ftdx22x2.fsf@jedbrown.org> <87d09121zt.fsf@jedbrown.org> Message-ID: Thanks, all. Very useful. Xiaodong Liu, PhD X: Computational Physics Division Los Alamos National Laboratory P.O. Box 1663, Los Alamos, NM 87544 505-709-0534 On Tue, Mar 24, 2020 at 8:42 AM Jed Brown wrote: > Xiaodong Liu writes: > > > Thanks, Mark and Jed. It is very helpful. > > So, for present, Petsc doesn't support Q1 interperpolation for > > cell-centered multigrid, right? > > DMDA does not, though you can create your own interpolation matrix, and > patches are certainly welcome. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perceval.desforges at polytechnique.edu Tue Mar 24 11:18:13 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Tue, 24 Mar 2020 17:18:13 +0100 Subject: [petsc-users] Calculating inertias In-Reply-To: References: Message-ID: <736ea0d3e54497123c4b770560813d30@polytechnique.edu> Thank you very much, this seems to work well. I have another question linked to this which is probably a bit basic so I apologize. At the end of my program, I want to destroy all the petsc objects. 
I start by destroying the matrixes, then the ksp and pc objects similarly to how it's done in the examples. However I get an error when attempting to destroy the KSP and PC objects : [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: Wrong type of object: Parameter # 1 I've tried switching the order around, but I still get the same errors. And if I don't destroy these objects I get a memory leak, which I'd like to avoid. My question is I don't really understand when and how I'm supposed to destroy the KSP and PC objects? Thanks again, Best regards, Perceval, > You can do this directly in PETSc. Create a KSP object with PREONLY and PCCHOLESKY (or just a PC object). Then call KSPSetUp (or PCSetUp) and extract the factored matrix with PCFactorGetMatrix(). Then call MatGetInertia() on the factored matrix. Repeat this for each value of E. > > I guess it can be even shorter if you call MatCholeskyFactor() directly. > > Jose > >> El 24 mar 2020, a las 11:07, Perceval Desforges escribi?: >> >> Dear petsc developers, >> >> I am interested in calculating the inertias of matrixes. Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. >> >> In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? >> >> Best regards, >> >> Perceval, >> >> P.S. my last email seems to not have been sent (I couldn't find it in the archives) so I am trying again... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Tue Mar 24 11:22:27 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 24 Mar 2020 17:22:27 +0100 Subject: [petsc-users] Calculating inertias In-Reply-To: <736ea0d3e54497123c4b770560813d30@polytechnique.edu> References: <736ea0d3e54497123c4b770560813d30@polytechnique.edu> Message-ID: <758E61E6-C5A3-4C53-BCC0-BF1CA792C56D@dsic.upv.es> If you obtained the PC object with KSPGetPC() then you do not have to destroy it, only the KSP object. KSPGetPC() only gives a pointer to the internal object, which is managed by the KSP. Jose > El 24 mar 2020, a las 17:18, Perceval Desforges escribi?: > > Thank you very much, this seems to work well. > > I have another question linked to this which is probably a bit basic so I apologize. > > At the end of my program, I want to destroy all the petsc objects. I start by destroying the matrixes, then the ksp and pc objects similarly to how it's done in the examples. However I get an error when attempting to destroy the KSP and PC objects : > > [0]PETSC ERROR: Invalid argument > > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > > I've tried switching the order around, but I still get the same errors. > > And if I don't destroy these objects I get a memory leak, which I'd like to avoid. > > My question is I don't really understand when and how I'm supposed to destroy the KSP and PC objects? > > Thanks again, > > Best regards, > > Perceval, > > > > > >> You can do this directly in PETSc. Create a KSP object with PREONLY and PCCHOLESKY (or just a PC object). Then call KSPSetUp (or PCSetUp) and extract the factored matrix with PCFactorGetMatrix(). Then call MatGetInertia() on the factored matrix. Repeat this for each value of E. >> >> I guess it can be even shorter if you call MatCholeskyFactor() directly. 
>> >> Jose >> >> >>> El 24 mar 2020, a las 11:07, Perceval Desforges escribi?: >>> >>> Dear petsc developers, >>> >>> I am interested in calculating the inertias of matrixes. Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. >>> >>> In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? >>> >>> Best regards, >>> >>> Perceval, >>> >>> P.S. my last email seems to not have been sent (I couldn't find it in the archives) so I am trying again... >>> > > From perceval.desforges at polytechnique.edu Tue Mar 24 12:26:32 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Tue, 24 Mar 2020 18:26:32 +0100 Subject: [petsc-users] Calculating inertias In-Reply-To: <758E61E6-C5A3-4C53-BCC0-BF1CA792C56D@dsic.upv.es> References: <736ea0d3e54497123c4b770560813d30@polytechnique.edu> <758E61E6-C5A3-4C53-BCC0-BF1CA792C56D@dsic.upv.es> Message-ID: <9387344953c3b7f9327f558e36e4c066@polytechnique.edu> Thank you but I am still confused, I still get the same error message when calling KSPDestroy(). My code looks something like this: ierr = MatShift(M,-E); ierr = KSPCreate(PETSC_COMM_WORLD,&ksp); ierr = KSPSetOperators(ksp,M,M); ierr = KSPSetType(ksp,KSPPREONLY); ierr = KSPGetPC(ksp,&pc); ierr = PCSetType(pc,PCCHOLESKY); ierr = KSPSetUp(ksp); ierr = PCFactorGetMatrix(pc,&B); ierr = MatGetInertia(B,&nneg,&nzero,&npos); ierr = PetscPrintf(PETSC_COMM_WORLD,"nneg: %D nzero: %D npos: %D \n",nneg,nzero,npos); ierr = MatDestroy(&B); ierr = MatDestroy(&M); ierr = KSPDestroy(&ksp); where I've defined the matrix M beforehand. What am I doing wrong? Thanks again, Best regards, Perceval, > If you obtained the PC object with KSPGetPC() then you do not have to destroy it, only the KSP object. KSPGetPC() only gives a pointer to the internal object, which is managed by the KSP. > Jose > > El 24 mar 2020, a las 17:18, Perceval Desforges escribi?: > > Thank you very much, this seems to work well. > > I have another question linked to this which is probably a bit basic so I apologize. > > At the end of my program, I want to destroy all the petsc objects. I start by destroying the matrixes, then the ksp and pc objects similarly to how it's done in the examples. However I get an error when attempting to destroy the KSP and PC objects : > > [0]PETSC ERROR: Invalid argument > > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > > I've tried switching the order around, but I still get the same errors. > > And if I don't destroy these objects I get a memory leak, which I'd like to avoid. > > My question is I don't really understand when and how I'm supposed to destroy the KSP and PC objects? > > Thanks again, > > Best regards, > > Perceval, > > You can do this directly in PETSc. Create a KSP object with PREONLY and PCCHOLESKY (or just a PC object). Then call KSPSetUp (or PCSetUp) and extract the factored matrix with PCFactorGetMatrix(). Then call MatGetInertia() on the factored matrix. Repeat this for each value of E. > > I guess it can be even shorter if you call MatCholeskyFactor() directly. > > Jose > > El 24 mar 2020, a las 11:07, Perceval Desforges escribi?: > > Dear petsc developers, > > I am interested in calculating the inertias of matrixes. 
Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. > > In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? > > Best regards, > > Perceval, > > P.S. my last email seems to not have been sent (I couldn't find it in the archives) so I am trying again... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Tue Mar 24 12:32:33 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 24 Mar 2020 18:32:33 +0100 Subject: [petsc-users] Calculating inertias In-Reply-To: <9387344953c3b7f9327f558e36e4c066@polytechnique.edu> References: <736ea0d3e54497123c4b770560813d30@polytechnique.edu> <758E61E6-C5A3-4C53-BCC0-BF1CA792C56D@dsic.upv.es> <9387344953c3b7f9327f558e36e4c066@polytechnique.edu> Message-ID: <1B435522-BF15-406F-B10E-66014B4D9A56@dsic.upv.es> MatDestroy(&B) should not be called, for the same reason as the PC - it is owned by the PC object, it just gives you a pointer. This is stated in the manpage of PCFactorGetMatrix: Notes: Does not increase the reference count for the matrix so DO NOT destroy it As a general rule, objects returned by functions XXXGetYYY() should not be destroyed, while objects returned by functions XXXCreateYYY() should. Jose > El 24 mar 2020, a las 18:26, Perceval Desforges escribi?: > > Thank you but I am still confused, I still get the same error message when calling KSPDestroy(). My code looks something like this: > > > > ierr = MatShift(M,-E); > ierr = KSPCreate(PETSC_COMM_WORLD,&ksp); > ierr = KSPSetOperators(ksp,M,M); > ierr = KSPSetType(ksp,KSPPREONLY); > ierr = KSPGetPC(ksp,&pc); > ierr = PCSetType(pc,PCCHOLESKY); > ierr = KSPSetUp(ksp); > ierr = PCFactorGetMatrix(pc,&B); > ierr = MatGetInertia(B,&nneg,&nzero,&npos); > ierr = PetscPrintf(PETSC_COMM_WORLD,"nneg: %D nzero: %D npos: %D \n",nneg,nzero,npos); > ierr = MatDestroy(&B); > ierr = MatDestroy(&M); > ierr = KSPDestroy(&ksp); > > where I've defined the matrix M beforehand. What am I doing wrong? > > Thanks again, > > Best regards, > > Perceval, > > > >> If you obtained the PC object with KSPGetPC() then you do not have to destroy it, only the KSP object. KSPGetPC() only gives a pointer to the internal object, which is managed by the KSP. >> Jose >> >> >>> El 24 mar 2020, a las 17:18, Perceval Desforges escribi?: >>> >>> Thank you very much, this seems to work well. >>> >>> I have another question linked to this which is probably a bit basic so I apologize. >>> >>> At the end of my program, I want to destroy all the petsc objects. I start by destroying the matrixes, then the ksp and pc objects similarly to how it's done in the examples. However I get an error when attempting to destroy the KSP and PC objects : >>> >>> [0]PETSC ERROR: Invalid argument >>> >>> [0]PETSC ERROR: Wrong type of object: Parameter # 1 >>> >>> I've tried switching the order around, but I still get the same errors. >>> >>> And if I don't destroy these objects I get a memory leak, which I'd like to avoid. >>> >>> My question is I don't really understand when and how I'm supposed to destroy the KSP and PC objects? >>> >>> Thanks again, >>> >>> Best regards, >>> >>> Perceval, >>> >>> >>> >>> >>> >>>> You can do this directly in PETSc. Create a KSP object with PREONLY and PCCHOLESKY (or just a PC object). 
Then call KSPSetUp (or PCSetUp) and extract the factored matrix with PCFactorGetMatrix(). Then call MatGetInertia() on the factored matrix. Repeat this for each value of E. >>>> >>>> I guess it can be even shorter if you call MatCholeskyFactor() directly. >>>> >>>> Jose >>>> >>>> >>>>> El 24 mar 2020, a las 11:07, Perceval Desforges escribi?: >>>>> >>>>> Dear petsc developers, >>>>> >>>>> I am interested in calculating the inertias of matrixes. Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. >>>>> >>>>> In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? >>>>> >>>>> Best regards, >>>>> >>>>> Perceval, >>>>> >>>>> P.S. my last email seems to not have been sent (I couldn't find it in the archives) so I am trying again... >>>>> >>> >>> > > From perceval.desforges at polytechnique.edu Tue Mar 24 13:26:44 2020 From: perceval.desforges at polytechnique.edu (Perceval Desforges) Date: Tue, 24 Mar 2020 19:26:44 +0100 Subject: [petsc-users] Calculating inertias In-Reply-To: <1B435522-BF15-406F-B10E-66014B4D9A56@dsic.upv.es> References: <736ea0d3e54497123c4b770560813d30@polytechnique.edu> <758E61E6-C5A3-4C53-BCC0-BF1CA792C56D@dsic.upv.es> <9387344953c3b7f9327f558e36e4c066@polytechnique.edu> <1B435522-BF15-406F-B10E-66014B4D9A56@dsic.upv.es> Message-ID: Ah thank you very much! It works perfectly now. I will keep in mind that rule from now on. Best regards, Perceval, > MatDestroy(&B) should not be called, for the same reason as the PC - it is owned by the PC object, it just gives you a pointer. This is stated in the manpage of PCFactorGetMatrix: > > Notes: > Does not increase the reference count for the matrix so DO NOT destroy it > > As a general rule, objects returned by functions XXXGetYYY() should not be destroyed, while objects returned by functions XXXCreateYYY() should. > > Jose > > El 24 mar 2020, a las 18:26, Perceval Desforges escribi?: > > Thank you but I am still confused, I still get the same error message when calling KSPDestroy(). My code looks something like this: > > ierr = MatShift(M,-E); > ierr = KSPCreate(PETSC_COMM_WORLD,&ksp); > ierr = KSPSetOperators(ksp,M,M); > ierr = KSPSetType(ksp,KSPPREONLY); > ierr = KSPGetPC(ksp,&pc); > ierr = PCSetType(pc,PCCHOLESKY); > ierr = KSPSetUp(ksp); > ierr = PCFactorGetMatrix(pc,&B); > ierr = MatGetInertia(B,&nneg,&nzero,&npos); > ierr = PetscPrintf(PETSC_COMM_WORLD,"nneg: %D nzero: %D npos: %D \n",nneg,nzero,npos); > ierr = MatDestroy(&B); > ierr = MatDestroy(&M); > ierr = KSPDestroy(&ksp); > > where I've defined the matrix M beforehand. What am I doing wrong? > > Thanks again, > > Best regards, > > Perceval, > > If you obtained the PC object with KSPGetPC() then you do not have to destroy it, only the KSP object. KSPGetPC() only gives a pointer to the internal object, which is managed by the KSP. > Jose > > El 24 mar 2020, a las 17:18, Perceval Desforges escribi?: > > Thank you very much, this seems to work well. > > I have another question linked to this which is probably a bit basic so I apologize. > > At the end of my program, I want to destroy all the petsc objects. I start by destroying the matrixes, then the ksp and pc objects similarly to how it's done in the examples. 
However I get an error when attempting to destroy the KSP and PC objects : > > [0]PETSC ERROR: Invalid argument > > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > > I've tried switching the order around, but I still get the same errors. > > And if I don't destroy these objects I get a memory leak, which I'd like to avoid. > > My question is I don't really understand when and how I'm supposed to destroy the KSP and PC objects? > > Thanks again, > > Best regards, > > Perceval, > > You can do this directly in PETSc. Create a KSP object with PREONLY and PCCHOLESKY (or just a PC object). Then call KSPSetUp (or PCSetUp) and extract the factored matrix with PCFactorGetMatrix(). Then call MatGetInertia() on the factored matrix. Repeat this for each value of E. > > I guess it can be even shorter if you call MatCholeskyFactor() directly. > > Jose > > El 24 mar 2020, a las 11:07, Perceval Desforges escribi?: > > Dear petsc developers, > > I am interested in calculating the inertias of matrixes. Specifically, for a certain matrix A, and for different real numbers E, I want to calculate the inertias of (A - E * I), in order to get the number of eigenvalues less than E. > > In order to do this I have been setting up a slepc EPS object with spectrum slicing, and using EPSKrylovSchurGetInertias. I realize this is a bit convoluted, and was wondering if there is a better way to do this? > > Best regards, > > Perceval, > > P.S. my last email seems to not have been sent (I couldn't find it in the archives) so I am trying again... -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.M.Aragon at tudelft.nl Wed Mar 25 11:13:35 2020 From: A.M.Aragon at tudelft.nl (Alejandro Aragon - 3ME) Date: Wed, 25 Mar 2020 16:13:35 +0000 Subject: [petsc-users] [petsc4py] Assembly fails Message-ID: Dear everyone, I?m new to petsc4py and I?m trying to run a simple finite element code that uses DMPLEX to load a .msh file (created by Gmsh). In version 3.10 the code was working but I recently upgraded to 3.12 and I get the following error: (.pydev) ? testmodule git:(e0bc9ae) ? 
mpirun -np 2 python testmodule/__main__.py {3: } {3: } Traceback (most recent call last): File "testmodule/__main__.py", line 32, in sys.exit(main(sys.argv)) File "testmodule/__main__.py", line 29, in main step.solve(m) File "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line 33, in solve self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin petsc4py.PETSc.Error: error code 63 [1] MatAssemblyBegin() line 5182 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c [1] MatAssemblyBegin_MPIAIJ() line 810 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c [1] MatStashScatterBegin_Private() line 462 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [1] MatStashScatterBegin_BTS() line 931 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [1] PetscCommBuildTwoSidedFReq() line 555 in /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c [1] Argument out of range [1] toranks[0] 2 not in comm size 2 Traceback (most recent call last): File "testmodule/__main__.py", line 32, in sys.exit(main(sys.argv)) File "testmodule/__main__.py", line 29, in main step.solve(m) File "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line 33, in solve self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin petsc4py.PETSc.Error: error code 63 [0] MatAssemblyBegin() line 5182 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c [0] MatAssemblyBegin_MPIAIJ() line 810 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c [0] MatStashScatterBegin_Private() line 462 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [0] MatStashScatterBegin_BTS() line 931 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [0] PetscCommBuildTwoSidedFReq() line 555 in /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c [0] Argument out of range [0] toranks[0] 2 not in comm size 2 ------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code.. Per user-direction, the job has been aborted. ------------------------------------------------------- -------------------------------------------------------------------------- mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[46994,1],0] Exit code: 1 -------------------------------------------------------------------------- This is in the call to assembly, which looks like this: # Begins assembling the matrix. This routine should be called after completing all calls to MatSetValues(). self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 # Completes assembling the matrix. This routine should be called after MatAssemblyBegin(). self.Amat.assemblyEnd(assembly=0) I would appreciate if someone can give me some insight on what has changed in the new version of petsc4py (or petsc for that matter) to make this code work again. Best regards, ? Alejandro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Mar 25 11:37:15 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Mar 2020 12:37:15 -0400 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: On Wed, Mar 25, 2020 at 12:29 PM Alejandro Aragon - 3ME < A.M.Aragon at tudelft.nl> wrote: > Dear everyone, > > I?m new to petsc4py and I?m trying to run a simple finite element code > that uses DMPLEX to load a .msh file (created by Gmsh). In version 3.10 the > code was working but I recently upgraded to 3.12 and I get the following > error: > > (.pydev) ? testmodule git:(e0bc9ae) ? mpirun -np 2 python > testmodule/__main__.py > {3: } > {3: } > Traceback (most recent call last): > File "testmodule/__main__.py", line 32, in > sys.exit(main(sys.argv)) > File "testmodule/__main__.py", line 29, in main > step.solve(m) > File > "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line > 33, in solve > self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 > File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin > petsc4py.PETSc.Error: error code 63 > [1] MatAssemblyBegin() line 5182 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c > [1] MatAssemblyBegin_MPIAIJ() line 810 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c > [1] MatStashScatterBegin_Private() line 462 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [1] MatStashScatterBegin_BTS() line 931 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [1] PetscCommBuildTwoSidedFReq() line 555 in > /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c > [1] Argument out of range > [1] toranks[0] 2 not in comm size 2 > Traceback (most recent call last): > File "testmodule/__main__.py", line 32, in > sys.exit(main(sys.argv)) > File "testmodule/__main__.py", line 29, in main > step.solve(m) > File > "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line > 33, in solve > self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 > File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin > petsc4py.PETSc.Error: error code 63 > [0] MatAssemblyBegin() line 5182 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c > [0] MatAssemblyBegin_MPIAIJ() line 810 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c > [0] MatStashScatterBegin_Private() line 462 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [0] MatStashScatterBegin_BTS() line 931 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [0] PetscCommBuildTwoSidedFReq() line 555 in > /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c > [0] Argument out of range > [0] toranks[0] 2 not in comm size 2 > ------------------------------------------------------- > Primary job terminated normally, but 1 process returned > a non-zero exit code.. Per user-direction, the job has been aborted. > ------------------------------------------------------- > -------------------------------------------------------------------------- > mpirun detected that one or more processes exited with non-zero status, > thus causing > the job to be terminated. The first process to do so was: > > Process name: [[46994,1],0] > Exit code: 1 > -------------------------------------------------------------------------- > > > This is in the call to assembly, which looks like this: > > # Begins assembling the matrix. 
This routine should be called after completing all calls to MatSetValues(). > self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 > # Completes assembling the matrix. This routine should be called after MatAssemblyBegin(). > self.Amat.assemblyEnd(assembly=0) > > I would appreciate if someone can give me some insight on what has changed > in the new version of petsc4py (or petsc for that matter) to make this code > work again. > It looks like you have an inconsistent build, or a memory overwrite. Since you are in Python, I suspect the former. Can you build PETSc from scratch and try this? Does it work in serial? Can you send a small code that reproduces this? Thanks, Matt > Best regards, > > ? Alejandro > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aminthefresh at gmail.com Wed Mar 25 11:58:20 2020 From: aminthefresh at gmail.com (Amin Sadeghi) Date: Wed, 25 Mar 2020 12:58:20 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 Message-ID: Hi, I ran KSP example 45 on a single node with 32 cores and 125GB memory using 1, 16 and 32 MPI processes. Here's a comparison of the time spent during KSP.solve: - 1 MPI process: ~98 sec, speedup: 1X - 16 MPI processes: ~12 sec, speedup: ~8X - 32 MPI processes: ~11 sec, speedup: ~9X Since the problem size is large enough (8M unknowns), I expected a speedup much closer to 32X, rather than 9X. Is this expected? If yes, how can it be improved? I've attached three log files for more details. Sincerely, Amin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ncpus_01.log Type: text/x-log Size: 22744 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ncpus_32.log Type: text/x-log Size: 26476 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ncpus_16.log Type: text/x-log Size: 26520 bytes Desc: not available URL: From knepley at gmail.com Wed Mar 25 12:56:55 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Mar 2020 13:56:55 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: On Wed, Mar 25, 2020 at 1:01 PM Amin Sadeghi wrote: > Hi, > > I ran KSP example 45 on a single node with 32 cores and 125GB memory using > 1, 16 and 32 MPI processes. Here's a comparison of the time spent during > KSP.solve: > > - 1 MPI process: ~98 sec, speedup: 1X > - 16 MPI processes: ~12 sec, speedup: ~8X > - 32 MPI processes: ~11 sec, speedup: ~9X > > Since the problem size is large enough (8M unknowns), I expected a speedup > much closer to 32X, rather than 9X. Is this expected? If yes, how can it be > improved? > > I've attached three log files for more details. > We have answered this here: https://www.mcs.anl.gov/petsc/documentation/faq.html#computers However, I can briefly summarize it. The bottleneck here is not computing power, it is memory bandwidth. The node you are running on has enough bandwidth for about 8 processes, not 32. I probably takes 12-16 processes to saturate the memory bandwidth, but not 32. That is why you see no speedup after 16. There is no way to improve this by optimization. 
The only thing to do is change the algorithm you are using. This behavior has been extensively documented and talked about for two decades. See, for example, the Roofline Performance Model. Thanks, Matt > Sincerely, > Amin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Mar 25 13:04:18 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 25 Mar 2020 14:04:18 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: I would guess that you are saturating the memory bandwidth. After you make PETSc (make all) it will suggest that you test it (make test) and suggest that you run streams (make streams). I see Matt answered but let me add that when you make streams you will seed the memory rate for 1,2,3, ... NP processes. If your machine is decent you should see very good speed up at the beginning and then it will start to saturate. You are seeing about 50% of perfect speedup at 16 process. I would expect that you will see something similar with streams. Without knowing your machine, your results look typical. On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi wrote: > Hi, > > I ran KSP example 45 on a single node with 32 cores and 125GB memory using > 1, 16 and 32 MPI processes. Here's a comparison of the time spent during > KSP.solve: > > - 1 MPI process: ~98 sec, speedup: 1X > - 16 MPI processes: ~12 sec, speedup: ~8X > - 32 MPI processes: ~11 sec, speedup: ~9X > > Since the problem size is large enough (8M unknowns), I expected a speedup > much closer to 32X, rather than 9X. Is this expected? If yes, how can it be > improved? > > I've attached three log files for more details. > > Sincerely, > Amin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aminthefresh at gmail.com Wed Mar 25 13:08:56 2020 From: aminthefresh at gmail.com (Amin Sadeghi) Date: Wed, 25 Mar 2020 14:08:56 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: Thank you Matt and Mark for the explanation. That makes sense. Please correct me if I'm wrong, I think instead of asking for the whole node with 32 cores, if I ask for more nodes, say 4 or 8, but each with 8 cores, then I should see much better speedups. Is that correct? On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: > I would guess that you are saturating the memory bandwidth. After you make > PETSc (make all) it will suggest that you test it (make test) and suggest > that you run streams (make streams). > > I see Matt answered but let me add that when you make streams you will > seed the memory rate for 1,2,3, ... NP processes. If your machine is decent > you should see very good speed up at the beginning and then it will start > to saturate. You are seeing about 50% of perfect speedup at 16 process. I > would expect that you will see something similar with streams. Without > knowing your machine, your results look typical. > > On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi > wrote: > >> Hi, >> >> I ran KSP example 45 on a single node with 32 cores and 125GB memory >> using 1, 16 and 32 MPI processes. 
Here's a comparison of the time spent >> during KSP.solve: >> >> - 1 MPI process: ~98 sec, speedup: 1X >> - 16 MPI processes: ~12 sec, speedup: ~8X >> - 32 MPI processes: ~11 sec, speedup: ~9X >> >> Since the problem size is large enough (8M unknowns), I expected a >> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >> can it be improved? >> >> I've attached three log files for more details. >> >> Sincerely, >> Amin >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 25 13:16:08 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Mar 2020 14:16:08 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: On Wed, Mar 25, 2020 at 2:11 PM Amin Sadeghi wrote: > Thank you Matt and Mark for the explanation. That makes sense. Please > correct me if I'm wrong, I think instead of asking for the whole node with > 32 cores, if I ask for more nodes, say 4 or 8, but each with 8 cores, then > I should see much better speedups. Is that correct? > Yes, exactly Matt > On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: > >> I would guess that you are saturating the memory bandwidth. After >> you make PETSc (make all) it will suggest that you test it (make test) and >> suggest that you run streams (make streams). >> >> I see Matt answered but let me add that when you make streams you will >> seed the memory rate for 1,2,3, ... NP processes. If your machine is decent >> you should see very good speed up at the beginning and then it will start >> to saturate. You are seeing about 50% of perfect speedup at 16 process. I >> would expect that you will see something similar with streams. Without >> knowing your machine, your results look typical. >> >> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi >> wrote: >> >>> Hi, >>> >>> I ran KSP example 45 on a single node with 32 cores and 125GB memory >>> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >>> during KSP.solve: >>> >>> - 1 MPI process: ~98 sec, speedup: 1X >>> - 16 MPI processes: ~12 sec, speedup: ~8X >>> - 32 MPI processes: ~11 sec, speedup: ~9X >>> >>> Since the problem size is large enough (8M unknowns), I expected a >>> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >>> can it be improved? >>> >>> I've attached three log files for more details. >>> >>> Sincerely, >>> Amin >>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Mar 25 13:16:51 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 25 Mar 2020 14:16:51 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: Also, a better test is see where streams pretty much saturates, then run that many processors per node and do the same test by increasing the nodes. This will tell you how well your network communication is doing. But this result has a lot of stuff in "network communication" that can be further evaluated. The worst thing about this, I would think, is that the partitioning is blind to the memory hierarchy of inter and intra node communication. 
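As a quick aside, it helps to put a number on how far from ideal each run is before digging into partitioning. A small sketch, using only the timings already quoted in this thread (nothing PETSc-specific):

    t1 = 98.0                    # seconds with 1 MPI process, from the logs above
    runs = {16: 12.0, 32: 11.0}  # seconds with 16 and 32 processes
    for p, tp in runs.items():
        speedup = t1 / tp
        efficiency = speedup / p
        print(f"{p:2d} ranks: speedup {speedup:4.1f}x, parallel efficiency {efficiency:.0%}")

    # Prints roughly 8.2x / 51% at 16 ranks and 8.9x / 28% at 32 ranks,
    # i.e. the second 16 ranks buy almost nothing once the bus is saturated.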
The next thing to do is run with an initial grid that puts one cell per node and the do uniform refinement, until you have one cell per process (eg, one refinement step using 8 processes per node), partition to get one cell per process, then do uniform refinement to get a reasonable sized local problem. Alas, this is not easy to do, but it is doable. On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: > I would guess that you are saturating the memory bandwidth. After you make > PETSc (make all) it will suggest that you test it (make test) and suggest > that you run streams (make streams). > > I see Matt answered but let me add that when you make streams you will > seed the memory rate for 1,2,3, ... NP processes. If your machine is decent > you should see very good speed up at the beginning and then it will start > to saturate. You are seeing about 50% of perfect speedup at 16 process. I > would expect that you will see something similar with streams. Without > knowing your machine, your results look typical. > > On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi > wrote: > >> Hi, >> >> I ran KSP example 45 on a single node with 32 cores and 125GB memory >> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >> during KSP.solve: >> >> - 1 MPI process: ~98 sec, speedup: 1X >> - 16 MPI processes: ~12 sec, speedup: ~8X >> - 32 MPI processes: ~11 sec, speedup: ~9X >> >> Since the problem size is large enough (8M unknowns), I expected a >> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >> can it be improved? >> >> I've attached three log files for more details. >> >> Sincerely, >> Amin >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Wed Mar 25 14:51:34 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 25 Mar 2020 14:51:34 -0500 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: I repeated your experiment on one node of TACC Frontera, 1 rank: 85.0s 16 ranks: 8.2s, 10x speedup 32 ranks: 5.7s, 15x speedup --Junchao Zhang On Wed, Mar 25, 2020 at 1:18 PM Mark Adams wrote: > Also, a better test is see where streams pretty much saturates, then run > that many processors per node and do the same test by increasing the nodes. > This will tell you how well your network communication is doing. > > But this result has a lot of stuff in "network communication" that can be > further evaluated. The worst thing about this, I would think, is that the > partitioning is blind to the memory hierarchy of inter and intra node > communication. The next thing to do is run with an initial grid that puts > one cell per node and the do uniform refinement, until you have one cell > per process (eg, one refinement step using 8 processes per node), partition > to get one cell per process, then do uniform refinement to get a > reasonable sized local problem. Alas, this is not easy to do, but it is > doable. > > On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: > >> I would guess that you are saturating the memory bandwidth. After >> you make PETSc (make all) it will suggest that you test it (make test) and >> suggest that you run streams (make streams). >> >> I see Matt answered but let me add that when you make streams you will >> seed the memory rate for 1,2,3, ... NP processes. If your machine is decent >> you should see very good speed up at the beginning and then it will start >> to saturate. You are seeing about 50% of perfect speedup at 16 process. 
I >> would expect that you will see something similar with streams. Without >> knowing your machine, your results look typical. >> >> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi >> wrote: >> >>> Hi, >>> >>> I ran KSP example 45 on a single node with 32 cores and 125GB memory >>> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >>> during KSP.solve: >>> >>> - 1 MPI process: ~98 sec, speedup: 1X >>> - 16 MPI processes: ~12 sec, speedup: ~8X >>> - 32 MPI processes: ~11 sec, speedup: ~9X >>> >>> Since the problem size is large enough (8M unknowns), I expected a >>> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >>> can it be improved? >>> >>> I've attached three log files for more details. >>> >>> Sincerely, >>> Amin >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From aminthefresh at gmail.com Wed Mar 25 16:40:13 2020 From: aminthefresh at gmail.com (Amin Sadeghi) Date: Wed, 25 Mar 2020 17:40:13 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: Junchao, thank you for doing the experiment, I guess TACC Frontera nodes have higher memory bandwidth (maybe more modern CPU architecture, although I'm not familiar as to which hardware affect memory bandwidth) than Compute Canada's Graham. Mark, I did as you suggested. As you suspected, running make streams yielded the same results, indicating that the memory bandwidth saturated at around 8 MPI processes. I ran the experiment on multiple nodes but only requested 8 cores per node, and here is the result: 1 node (8 cores total): 17.5s, 6X speedup 2 nodes (16 cores total): 13.5s, 7X speedup 3 nodes (24 cores total): 9.4s, 10X speedup 4 nodes (32 cores total): 8.3s, 12X speedup 5 nodes (40 cores total): 7.0s, 14X speedup 6 nodes (48 cores total): 61.4s, 2X speedup [!!!] 7 nodes (56 cores total): 4.3s, 23X speedup 8 nodes (64 cores total): 3.7s, 27X speedup *Note:* as you can see, the experiment with 6 nodes showed extremely poor scaling, which I guess was an outlier, maybe due to some connection problem? I also ran another experiment, requesting 2 full nodes, i.e. 64 cores, and here's the result: 2 nodes (64 cores total): 6.0s, 16X speedup [32 cores each node] So, it turns out that given a fixed number of cores, i.e. 64 in our case, much better speedups (27X vs. 16X in our case) can be achieved if they are distributed among separate nodes. Anyways, I really appreciate all your inputs. *One final question:* From what I understand from Mark's comment, PETSc at the moment is blind to memory hierarchy, is it feasible to make PETSc aware of the inter and intra node communication so that partitioning is done to maximize performance? Or, to put it differently, is this something that PETSc devs have their eyes on for the future? Sincerely, Amin On Wed, Mar 25, 2020 at 3:51 PM Junchao Zhang wrote: > I repeated your experiment on one node of TACC Frontera, > 1 rank: 85.0s > 16 ranks: 8.2s, 10x speedup > 32 ranks: 5.7s, 15x speedup > > --Junchao Zhang > > > On Wed, Mar 25, 2020 at 1:18 PM Mark Adams wrote: > >> Also, a better test is see where streams pretty much saturates, then run >> that many processors per node and do the same test by increasing the nodes. >> This will tell you how well your network communication is doing. >> >> But this result has a lot of stuff in "network communication" that can be >> further evaluated. 
The worst thing about this, I would think, is that the >> partitioning is blind to the memory hierarchy of inter and intra node >> communication. The next thing to do is run with an initial grid that puts >> one cell per node and the do uniform refinement, until you have one cell >> per process (eg, one refinement step using 8 processes per node), partition >> to get one cell per process, then do uniform refinement to get a >> reasonable sized local problem. Alas, this is not easy to do, but it is >> doable. >> >> On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: >> >>> I would guess that you are saturating the memory bandwidth. After >>> you make PETSc (make all) it will suggest that you test it (make test) and >>> suggest that you run streams (make streams). >>> >>> I see Matt answered but let me add that when you make streams you will >>> seed the memory rate for 1,2,3, ... NP processes. If your machine is decent >>> you should see very good speed up at the beginning and then it will start >>> to saturate. You are seeing about 50% of perfect speedup at 16 process. I >>> would expect that you will see something similar with streams. Without >>> knowing your machine, your results look typical. >>> >>> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi >>> wrote: >>> >>>> Hi, >>>> >>>> I ran KSP example 45 on a single node with 32 cores and 125GB memory >>>> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >>>> during KSP.solve: >>>> >>>> - 1 MPI process: ~98 sec, speedup: 1X >>>> - 16 MPI processes: ~12 sec, speedup: ~8X >>>> - 32 MPI processes: ~11 sec, speedup: ~9X >>>> >>>> Since the problem size is large enough (8M unknowns), I expected a >>>> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >>>> can it be improved? >>>> >>>> I've attached three log files for more details. >>>> >>>> Sincerely, >>>> Amin >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 25 16:55:57 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Mar 2020 17:55:57 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: On Wed, Mar 25, 2020 at 5:41 PM Amin Sadeghi wrote: > Junchao, thank you for doing the experiment, I guess TACC Frontera nodes > have higher memory bandwidth (maybe more modern CPU architecture, although > I'm not familiar as to which hardware affect memory bandwidth) than Compute > Canada's Graham. > > Mark, I did as you suggested. As you suspected, running make streams > yielded the same results, indicating that the memory bandwidth saturated at > around 8 MPI processes. I ran the experiment on multiple nodes but only > requested 8 cores per node, and here is the result: > > 1 node (8 cores total): 17.5s, 6X speedup > 2 nodes (16 cores total): 13.5s, 7X speedup > 3 nodes (24 cores total): 9.4s, 10X speedup > 4 nodes (32 cores total): 8.3s, 12X speedup > 5 nodes (40 cores total): 7.0s, 14X speedup > 6 nodes (48 cores total): 61.4s, 2X speedup [!!!] > 7 nodes (56 cores total): 4.3s, 23X speedup > 8 nodes (64 cores total): 3.7s, 27X speedup > > *Note:* as you can see, the experiment with 6 nodes showed extremely poor > scaling, which I guess was an outlier, maybe due to some connection problem? > > I also ran another experiment, requesting 2 full nodes, i.e. 64 cores, and > here's the result: > > 2 nodes (64 cores total): 6.0s, 16X speedup [32 cores each node] > > So, it turns out that given a fixed number of cores, i.e. 
64 in our case, > much better speedups (27X vs. 16X in our case) can be achieved if they are > distributed among separate nodes. > > Anyways, I really appreciate all your inputs. > > *One final question:* From what I understand from Mark's comment, PETSc > at the moment is blind to memory hierarchy, is it feasible to make PETSc > aware of the inter and intra node communication so that partitioning is > done to maximize performance? Or, to put it differently, is this something > that PETSc devs have their eyes on for the future? > There is already stuff in VecScatter that knows about the memory hierarchy, which Junchao put in. We are actively working on some other node-aware algorithms. Thanks, Matt > Sincerely, > Amin > > > On Wed, Mar 25, 2020 at 3:51 PM Junchao Zhang > wrote: > >> I repeated your experiment on one node of TACC Frontera, >> 1 rank: 85.0s >> 16 ranks: 8.2s, 10x speedup >> 32 ranks: 5.7s, 15x speedup >> >> --Junchao Zhang >> >> >> On Wed, Mar 25, 2020 at 1:18 PM Mark Adams wrote: >> >>> Also, a better test is see where streams pretty much saturates, then run >>> that many processors per node and do the same test by increasing the nodes. >>> This will tell you how well your network communication is doing. >>> >>> But this result has a lot of stuff in "network communication" that can >>> be further evaluated. The worst thing about this, I would think, is that >>> the partitioning is blind to the memory hierarchy of inter and intra node >>> communication. The next thing to do is run with an initial grid that puts >>> one cell per node and the do uniform refinement, until you have one cell >>> per process (eg, one refinement step using 8 processes per node), partition >>> to get one cell per process, then do uniform refinement to get a >>> reasonable sized local problem. Alas, this is not easy to do, but it is >>> doable. >>> >>> On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: >>> >>>> I would guess that you are saturating the memory bandwidth. After >>>> you make PETSc (make all) it will suggest that you test it (make test) and >>>> suggest that you run streams (make streams). >>>> >>>> I see Matt answered but let me add that when you make streams you will >>>> seed the memory rate for 1,2,3, ... NP processes. If your machine is decent >>>> you should see very good speed up at the beginning and then it will start >>>> to saturate. You are seeing about 50% of perfect speedup at 16 process. I >>>> would expect that you will see something similar with streams. Without >>>> knowing your machine, your results look typical. >>>> >>>> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I ran KSP example 45 on a single node with 32 cores and 125GB memory >>>>> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >>>>> during KSP.solve: >>>>> >>>>> - 1 MPI process: ~98 sec, speedup: 1X >>>>> - 16 MPI processes: ~12 sec, speedup: ~8X >>>>> - 32 MPI processes: ~11 sec, speedup: ~9X >>>>> >>>>> Since the problem size is large enough (8M unknowns), I expected a >>>>> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >>>>> can it be improved? >>>>> >>>>> I've attached three log files for more details. >>>>> >>>>> Sincerely, >>>>> Amin >>>>> >>>> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aminthefresh at gmail.com Wed Mar 25 17:04:03 2020 From: aminthefresh at gmail.com (Amin Sadeghi) Date: Wed, 25 Mar 2020 18:04:03 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: That's great. Thanks for creating this great piece of software! Amin On Wed, Mar 25, 2020 at 5:56 PM Matthew Knepley wrote: > On Wed, Mar 25, 2020 at 5:41 PM Amin Sadeghi > wrote: > >> Junchao, thank you for doing the experiment, I guess TACC Frontera nodes >> have higher memory bandwidth (maybe more modern CPU architecture, although >> I'm not familiar as to which hardware affect memory bandwidth) than Compute >> Canada's Graham. >> >> Mark, I did as you suggested. As you suspected, running make streams >> yielded the same results, indicating that the memory bandwidth saturated at >> around 8 MPI processes. I ran the experiment on multiple nodes but only >> requested 8 cores per node, and here is the result: >> >> 1 node (8 cores total): 17.5s, 6X speedup >> 2 nodes (16 cores total): 13.5s, 7X speedup >> 3 nodes (24 cores total): 9.4s, 10X speedup >> 4 nodes (32 cores total): 8.3s, 12X speedup >> 5 nodes (40 cores total): 7.0s, 14X speedup >> 6 nodes (48 cores total): 61.4s, 2X speedup [!!!] >> 7 nodes (56 cores total): 4.3s, 23X speedup >> 8 nodes (64 cores total): 3.7s, 27X speedup >> >> *Note:* as you can see, the experiment with 6 nodes showed extremely >> poor scaling, which I guess was an outlier, maybe due to some connection >> problem? >> >> I also ran another experiment, requesting 2 full nodes, i.e. 64 cores, >> and here's the result: >> >> 2 nodes (64 cores total): 6.0s, 16X speedup [32 cores each node] >> >> So, it turns out that given a fixed number of cores, i.e. 64 in our case, >> much better speedups (27X vs. 16X in our case) can be achieved if they are >> distributed among separate nodes. >> >> Anyways, I really appreciate all your inputs. >> >> *One final question:* From what I understand from Mark's comment, PETSc >> at the moment is blind to memory hierarchy, is it feasible to make PETSc >> aware of the inter and intra node communication so that partitioning is >> done to maximize performance? Or, to put it differently, is this something >> that PETSc devs have their eyes on for the future? >> > > There is already stuff in VecScatter that knows about the memory > hierarchy, which Junchao put in. We are actively working on some other > node-aware algorithms. > > Thanks, > > Matt > > >> Sincerely, >> Amin >> >> >> On Wed, Mar 25, 2020 at 3:51 PM Junchao Zhang >> wrote: >> >>> I repeated your experiment on one node of TACC Frontera, >>> 1 rank: 85.0s >>> 16 ranks: 8.2s, 10x speedup >>> 32 ranks: 5.7s, 15x speedup >>> >>> --Junchao Zhang >>> >>> >>> On Wed, Mar 25, 2020 at 1:18 PM Mark Adams wrote: >>> >>>> Also, a better test is see where streams pretty much saturates, then >>>> run that many processors per node and do the same test by increasing the >>>> nodes. This will tell you how well your network communication is doing. >>>> >>>> But this result has a lot of stuff in "network communication" that can >>>> be further evaluated. The worst thing about this, I would think, is that >>>> the partitioning is blind to the memory hierarchy of inter and intra node >>>> communication. 
The next thing to do is run with an initial grid that puts >>>> one cell per node and the do uniform refinement, until you have one cell >>>> per process (eg, one refinement step using 8 processes per node), partition >>>> to get one cell per process, then do uniform refinement to get a >>>> reasonable sized local problem. Alas, this is not easy to do, but it is >>>> doable. >>>> >>>> On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: >>>> >>>>> I would guess that you are saturating the memory bandwidth. After >>>>> you make PETSc (make all) it will suggest that you test it (make test) and >>>>> suggest that you run streams (make streams). >>>>> >>>>> I see Matt answered but let me add that when you make streams you will >>>>> seed the memory rate for 1,2,3, ... NP processes. If your machine is decent >>>>> you should see very good speed up at the beginning and then it will start >>>>> to saturate. You are seeing about 50% of perfect speedup at 16 process. I >>>>> would expect that you will see something similar with streams. Without >>>>> knowing your machine, your results look typical. >>>>> >>>>> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I ran KSP example 45 on a single node with 32 cores and 125GB memory >>>>>> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >>>>>> during KSP.solve: >>>>>> >>>>>> - 1 MPI process: ~98 sec, speedup: 1X >>>>>> - 16 MPI processes: ~12 sec, speedup: ~8X >>>>>> - 32 MPI processes: ~11 sec, speedup: ~9X >>>>>> >>>>>> Since the problem size is large enough (8M unknowns), I expected a >>>>>> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >>>>>> can it be improved? >>>>>> >>>>>> I've attached three log files for more details. >>>>>> >>>>>> Sincerely, >>>>>> Amin >>>>>> >>>>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Wed Mar 25 17:39:52 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 25 Mar 2020 16:39:52 -0600 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: On Wed, Mar 25, 2020 at 12:18 PM Mark Adams wrote: > Also, a better test is see where streams pretty much saturates, then run > that many processors per node and do the same test by increasing the nodes. > This will tell you how well your network communication is doing. > > But this result has a lot of stuff in "network communication" that can be > further evaluated. The worst thing about this, I would think, is that the > partitioning is blind to the memory hierarchy of inter and intra node > communication. > Hierarchical partitioning was designed for this purpose. https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MATPARTITIONINGHIERARCH.html#MATPARTITIONINGHIERARCH Fande, > The next thing to do is run with an initial grid that puts one cell per > node and the do uniform refinement, until you have one cell per process > (eg, one refinement step using 8 processes per node), partition to get one > cell per process, then do uniform refinement to get a reasonable sized > local problem. Alas, this is not easy to do, but it is doable. 
> > On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: > >> I would guess that you are saturating the memory bandwidth. After >> you make PETSc (make all) it will suggest that you test it (make test) and >> suggest that you run streams (make streams). >> >> I see Matt answered but let me add that when you make streams you will >> seed the memory rate for 1,2,3, ... NP processes. If your machine is decent >> you should see very good speed up at the beginning and then it will start >> to saturate. You are seeing about 50% of perfect speedup at 16 process. I >> would expect that you will see something similar with streams. Without >> knowing your machine, your results look typical. >> >> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi >> wrote: >> >>> Hi, >>> >>> I ran KSP example 45 on a single node with 32 cores and 125GB memory >>> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >>> during KSP.solve: >>> >>> - 1 MPI process: ~98 sec, speedup: 1X >>> - 16 MPI processes: ~12 sec, speedup: ~8X >>> - 32 MPI processes: ~11 sec, speedup: ~9X >>> >>> Since the problem size is large enough (8M unknowns), I expected a >>> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >>> can it be improved? >>> >>> I've attached three log files for more details. >>> >>> Sincerely, >>> Amin >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jczhang at mcs.anl.gov Wed Mar 25 18:05:54 2020 From: jczhang at mcs.anl.gov (Zhang, Junchao) Date: Wed, 25 Mar 2020 23:05:54 +0000 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: <361FA3FF-816C-429A-9737-866117EA0392@anl.gov> MPI rank distribution (e.g., 8 ranks per node or 16 ranks per node) is usually managed by workload managers like Slurm, PBS through your job scripts, which is out of petsc?s control. From: Amin Sadeghi Date: Wednesday, March 25, 2020 at 4:40 PM To: Junchao Zhang Cc: Mark Adams , PETSc users list Subject: Re: [petsc-users] Poor speed up for KSP example 45 Junchao, thank you for doing the experiment, I guess TACC Frontera nodes have higher memory bandwidth (maybe more modern CPU architecture, although I'm not familiar as to which hardware affect memory bandwidth) than Compute Canada's Graham. Mark, I did as you suggested. As you suspected, running make streams yielded the same results, indicating that the memory bandwidth saturated at around 8 MPI processes. I ran the experiment on multiple nodes but only requested 8 cores per node, and here is the result: 1 node (8 cores total): 17.5s, 6X speedup 2 nodes (16 cores total): 13.5s, 7X speedup 3 nodes (24 cores total): 9.4s, 10X speedup 4 nodes (32 cores total): 8.3s, 12X speedup 5 nodes (40 cores total): 7.0s, 14X speedup 6 nodes (48 cores total): 61.4s, 2X speedup [!!!] 7 nodes (56 cores total): 4.3s, 23X speedup 8 nodes (64 cores total): 3.7s, 27X speedup Note: as you can see, the experiment with 6 nodes showed extremely poor scaling, which I guess was an outlier, maybe due to some connection problem? I also ran another experiment, requesting 2 full nodes, i.e. 64 cores, and here's the result: 2 nodes (64 cores total): 6.0s, 16X speedup [32 cores each node] So, it turns out that given a fixed number of cores, i.e. 64 in our case, much better speedups (27X vs. 16X in our case) can be achieved if they are distributed among separate nodes. Anyways, I really appreciate all your inputs. 
One final question: From what I understand from Mark's comment, PETSc at the moment is blind to memory hierarchy, is it feasible to make PETSc aware of the inter and intra node communication so that partitioning is done to maximize performance? Or, to put it differently, is this something that PETSc devs have their eyes on for the future? Sincerely, Amin On Wed, Mar 25, 2020 at 3:51 PM Junchao Zhang > wrote: I repeated your experiment on one node of TACC Frontera, 1 rank: 85.0s 16 ranks: 8.2s, 10x speedup 32 ranks: 5.7s, 15x speedup --Junchao Zhang On Wed, Mar 25, 2020 at 1:18 PM Mark Adams > wrote: Also, a better test is see where streams pretty much saturates, then run that many processors per node and do the same test by increasing the nodes. This will tell you how well your network communication is doing. But this result has a lot of stuff in "network communication" that can be further evaluated. The worst thing about this, I would think, is that the partitioning is blind to the memory hierarchy of inter and intra node communication. The next thing to do is run with an initial grid that puts one cell per node and the do uniform refinement, until you have one cell per process (eg, one refinement step using 8 processes per node), partition to get one cell per process, then do uniform refinement to get a reasonable sized local problem. Alas, this is not easy to do, but it is doable. On Wed, Mar 25, 2020 at 2:04 PM Mark Adams > wrote: I would guess that you are saturating the memory bandwidth. After you make PETSc (make all) it will suggest that you test it (make test) and suggest that you run streams (make streams). I see Matt answered but let me add that when you make streams you will seed the memory rate for 1,2,3, ... NP processes. If your machine is decent you should see very good speed up at the beginning and then it will start to saturate. You are seeing about 50% of perfect speedup at 16 process. I would expect that you will see something similar with streams. Without knowing your machine, your results look typical. On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi > wrote: Hi, I ran KSP example 45 on a single node with 32 cores and 125GB memory using 1, 16 and 32 MPI processes. Here's a comparison of the time spent during KSP.solve: - 1 MPI process: ~98 sec, speedup: 1X - 16 MPI processes: ~12 sec, speedup: ~8X - 32 MPI processes: ~11 sec, speedup: ~9X Since the problem size is large enough (8M unknowns), I expected a speedup much closer to 32X, rather than 9X. Is this expected? If yes, how can it be improved? I've attached three log files for more details. Sincerely, Amin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Mar 25 18:18:32 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 25 Mar 2020 19:18:32 -0400 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: On Wed, Mar 25, 2020 at 6:40 PM Fande Kong wrote: > > > On Wed, Mar 25, 2020 at 12:18 PM Mark Adams wrote: > >> Also, a better test is see where streams pretty much saturates, then run >> that many processors per node and do the same test by increasing the nodes. >> This will tell you how well your network communication is doing. >> >> But this result has a lot of stuff in "network communication" that can be >> further evaluated. The worst thing about this, I would think, is that the >> partitioning is blind to the memory hierarchy of inter and intra node >> communication. 
>> > > Hierarchical partitioning was designed for this purpose. > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MATPARTITIONINGHIERARCH.html#MATPARTITIONINGHIERARCH > > That's fantastic! > Fande, > > >> The next thing to do is run with an initial grid that puts one cell per >> node and the do uniform refinement, until you have one cell per process >> (eg, one refinement step using 8 processes per node), partition to get one >> cell per process, then do uniform refinement to get a reasonable sized >> local problem. Alas, this is not easy to do, but it is doable. >> >> On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: >> >>> I would guess that you are saturating the memory bandwidth. After >>> you make PETSc (make all) it will suggest that you test it (make test) and >>> suggest that you run streams (make streams). >>> >>> I see Matt answered but let me add that when you make streams you will >>> seed the memory rate for 1,2,3, ... NP processes. If your machine is decent >>> you should see very good speed up at the beginning and then it will start >>> to saturate. You are seeing about 50% of perfect speedup at 16 process. I >>> would expect that you will see something similar with streams. Without >>> knowing your machine, your results look typical. >>> >>> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi >>> wrote: >>> >>>> Hi, >>>> >>>> I ran KSP example 45 on a single node with 32 cores and 125GB memory >>>> using 1, 16 and 32 MPI processes. Here's a comparison of the time spent >>>> during KSP.solve: >>>> >>>> - 1 MPI process: ~98 sec, speedup: 1X >>>> - 16 MPI processes: ~12 sec, speedup: ~8X >>>> - 32 MPI processes: ~11 sec, speedup: ~9X >>>> >>>> Since the problem size is large enough (8M unknowns), I expected a >>>> speedup much closer to 32X, rather than 9X. Is this expected? If yes, how >>>> can it be improved? >>>> >>>> I've attached three log files for more details. >>>> >>>> Sincerely, >>>> Amin >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Wed Mar 25 20:18:06 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 25 Mar 2020 19:18:06 -0600 Subject: [petsc-users] Poor speed up for KSP example 45 In-Reply-To: References: Message-ID: <5C39D86F-3E3E-4F00-9717-38327E0377E3@gmail.com> In case someone wants to learn more about the hierarchical partitioning algorithm. Here is a reference https://arxiv.org/pdf/1809.02666.pdf Thanks Fande > On Mar 25, 2020, at 5:18 PM, Mark Adams wrote: > > ? > > >> On Wed, Mar 25, 2020 at 6:40 PM Fande Kong wrote: >>> >>> >>>> On Wed, Mar 25, 2020 at 12:18 PM Mark Adams wrote: >>>> Also, a better test is see where streams pretty much saturates, then run that many processors per node and do the same test by increasing the nodes. This will tell you how well your network communication is doing. >>>> >>>> But this result has a lot of stuff in "network communication" that can be further evaluated. The worst thing about this, I would think, is that the partitioning is blind to the memory hierarchy of inter and intra node communication. >>> >>> Hierarchical partitioning was designed for this purpose. https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MATPARTITIONINGHIERARCH.html#MATPARTITIONINGHIERARCH >>> >> >> That's fantastic! 
>> >> Fande, >> >>> The next thing to do is run with an initial grid that puts one cell per node and the do uniform refinement, until you have one cell per process (eg, one refinement step using 8 processes per node), partition to get one cell per process, then do uniform refinement to get a reasonable sized local problem. Alas, this is not easy to do, but it is doable. >>> >>>> On Wed, Mar 25, 2020 at 2:04 PM Mark Adams wrote: >>>> I would guess that you are saturating the memory bandwidth. After you make PETSc (make all) it will suggest that you test it (make test) and suggest that you run streams (make streams). >>>> >>>> I see Matt answered but let me add that when you make streams you will seed the memory rate for 1,2,3, ... NP processes. If your machine is decent you should see very good speed up at the beginning and then it will start to saturate. You are seeing about 50% of perfect speedup at 16 process. I would expect that you will see something similar with streams. Without knowing your machine, your results look typical. >>>> >>>>> On Wed, Mar 25, 2020 at 1:05 PM Amin Sadeghi wrote: >>>>> Hi, >>>>> >>>>> I ran KSP example 45 on a single node with 32 cores and 125GB memory using 1, 16 and 32 MPI processes. Here's a comparison of the time spent during KSP.solve: >>>>> >>>>> - 1 MPI process: ~98 sec, speedup: 1X >>>>> - 16 MPI processes: ~12 sec, speedup: ~8X >>>>> - 32 MPI processes: ~11 sec, speedup: ~9X >>>>> >>>>> Since the problem size is large enough (8M unknowns), I expected a speedup much closer to 32X, rather than 9X. Is this expected? If yes, how can it be improved? >>>>> >>>>> I've attached three log files for more details. >>>>> >>>>> Sincerely, >>>>> Amin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Thu Mar 26 06:00:28 2020 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 26 Mar 2020 14:00:28 +0300 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: On Wed, 25 Mar 2020 at 19:38, Matthew Knepley wrote: > > It looks like you have an inconsistent build, or a memory overwrite. Since > you are in Python, I suspect the former. Can you build > PETSc from scratch and try this? Does it work in serial? Can you send a > small code that reproduces this? > > Well, I think Alejandro installed PETSc/petsc4py via pip as per my instructions. Can you confirm, Alejandro? -- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.M.Aragon at tudelft.nl Thu Mar 26 06:07:09 2020 From: A.M.Aragon at tudelft.nl (Alejandro Aragon - 3ME) Date: Thu, 26 Mar 2020 11:07:09 +0000 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: <7F95CBAA-F9AB-4D4E-9D5C-7FCA9C07FD31@tudelft.nl> Hi Lisandro, Yes, that is the case, I ended up installing following your instructions since it was the simplest way to do it. I was actually trying to install PETSc 3.11 and then to install petsc4py using pip and providing environmental variables for PETSC_DIR and PETSC_ARCH, but I couldn?t make it work. Later Lisandro pointed out I could not use Python 3.8 for this. Best, ? Alejandro On 26 Mar 2020, at 12:00, Lisandro Dalcin > wrote: On Wed, 25 Mar 2020 at 19:38, Matthew Knepley > wrote: It looks like you have an inconsistent build, or a memory overwrite. 
Since you are in Python, I suspect the former. Can you build PETSc from scratch and try this? Does it work in serial? Can you send a small code that reproduces this? Well, I think Alejandro installed PETSc/petsc4py via pip as per my instructions. Can you confirm, Alejandro? -- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Thu Mar 26 08:34:37 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Thu, 26 Mar 2020 08:34:37 -0500 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: <7F95CBAA-F9AB-4D4E-9D5C-7FCA9C07FD31@tudelft.nl> References: <7F95CBAA-F9AB-4D4E-9D5C-7FCA9C07FD31@tudelft.nl> Message-ID: Do you provide a test example for the failure? --Junchao Zhang On Thu, Mar 26, 2020 at 6:07 AM Alejandro Aragon - 3ME < A.M.Aragon at tudelft.nl> wrote: > Hi Lisandro, > > Yes, that is the case, I ended up installing following your instructions > since it was the simplest way to do it. I was actually trying to install > PETSc 3.11 and then to install petsc4py using pip and providing > environmental variables for PETSC_DIR and PETSC_ARCH, but I couldn?t make > it work. Later Lisandro pointed out I could not use Python 3.8 for this. > > Best, > > ? Alejandro > > On 26 Mar 2020, at 12:00, Lisandro Dalcin wrote: > > > > On Wed, 25 Mar 2020 at 19:38, Matthew Knepley wrote: > >> >> It looks like you have an inconsistent build, or a memory overwrite. >> Since you are in Python, I suspect the former. Can you build >> PETSc from scratch and try this? Does it work in serial? Can you send a >> small code that reproduces this? >> >> > Well, I think Alejandro installed PETSc/petsc4py via pip as per my > instructions. Can you confirm, Alejandro? > > > > -- > Lisandro Dalcin > ============ > Research Scientist > Extreme Computing Research Center (ECRC) > King Abdullah University of Science and Technology (KAUST) > http://ecrc.kaust.edu.sa/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Thu Mar 26 16:14:06 2020 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Thu, 26 Mar 2020 22:14:06 +0100 Subject: [petsc-users] node DG with DMPlex In-Reply-To: References: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> <0526eb34-b4ce-19c4-4f76-81d2cd41cd45@univ-amu.fr> Message-ID: Hi matt Le 3/23/2020 ? 2:24 PM, Matthew Knepley a ?crit?: > On Wed, Mar 18, 2020 at 12:58 PM Yann Jobic > wrote: > > Hi matt, > > Le 3/17/2020 ? 4:00 PM, Matthew Knepley a ?crit?: > > On Mon, Mar 16, 2020 at 5:20 PM Yann Jobic > > > >> > wrote: > > > >? ? ?Hi all, > > > >? ? ?I would like to implement a nodal DG with the DMPlex interface. > >? ? ?Therefore, i must add the internal nodes to the DM (GLL > nodes), with > >? ? ?the > >? ? ?constrains : > >? ? ?1) Add them as solution points, with correct coordinates (and > keep the > >? ? ?good rotational ordering) > >? ? ?2) Find the shared nodes at faces in order to compute the fluxes > >? ? ?3) For parallel use, so synchronize the ghost node at each > time steps > > > > > > Let me get the fundamentals straight before advising, since I > have never > > implemented nodal DG. > > > >? ? 1) What is shared? > I need to duplicate an edge in 2D, or a facet in 3D, and to sync it > after a time step, in order to compute the numerical fluxes > (Lax-Friedrichs at the beginning). 
> > > I should have been more specific, but I think I see what you want. You > do not "share" unknowns between cells, > so all unknowns should be associated with some cell in the Section. > > You think of some cell unknowns as being "connected" to a face, so when > you want to calculate a flux, you need > the unknowns from the adjacent cell in order to do it. In order to do > this, I would partition with overlap=1, which > is what we do for finite volume, which has the same adjacency needs. You > might also set > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetAdjacency.html > to PETSC_TRUE, PETSC_FALSE, but you are probably doing everything > matrix-free if you are using DG. > The above is optimal for FV, but not for DG because you communicate more > than you absolutely have to. > > A more complicated, but optimal, thing to do would be to assign interior > dofs to the cell, and two sets of dofs > to each face, one for each cell. Then you only communicate the face > dofs. Its just more bookkeeping for you, > but it will work in parallel just fine. I'm going this way. So i should use dm/impls/plex/examples/tutorials/ex1.c.html as reference. I should define the internal nodes on cells : I define the section with 3 fields (field 0 on cells, field 1 and 2 on faces), as : numComp[0] = Nr; /* Total number of dof per Cell */ numDof[0*(dim+1)+0] = dim; /* defined over the Cell */ And on the same section, the dofs at faces : numComp[1] = NumDofPerFace; numComp[2] = NumDofPerFace; numDof[1*(dim+1)+dim-1] = dim-1; /* internal dof of the cell */ numDof[2*(dim+1)+dim-1] = dim-1; /* external dof of the cell */ Is it a good way to create my section ? Thus, the data is duplicated for the faces, that means that i have to sync the internal Face dof at faces with their corresponding values from the internal one (at cells). Here field 1 is synchronised with field 0, locally. But the external Face dof, field 2, have to be synchronised with the values of the adjacent cell. Is it possible to use something like DMPlexGetFaceFields ? Is there an example of such use of PetscSection and synchronisation process ? For the parallel part, should i use PetscSF object ? I read your article "Mesh Algorithms for PDE with Sieve I: Mesh Distribution". But it's refereeing to Matthew G. Knepley and Dmitry A. Karpeev. Sieve implementation. Technical Report ANL/MCS to appear, Argonne National Laboratory, January 2008. I couldn't find it. It is freely available ? > > I don't think you need extra vertices, > or coordinates, and for output I > recommend using DMPlexProject() to get > the solution in some space that can be plotted like P1, or anything else > supported by your visualization. I would like to use DMplex as much as i can, as i would in the future refine locally the mesh. I hope you're good in this difficult situation (covid19), Best regards, Yann > > ? Thanks, > > ? ? ?Matt > > > > >? ? ? ? We have an implementation of spectral element ordering > > > (https://gitlab.com/petsc/petsc/-/blob/master/src/dm/impls/plex/examples/tutorials/ex6.c). > > > Those share > >? ? ? ? the whole element boundary. > > > >? ? 2) What ghosts do you need? > In order to compute the numerical fluxes of one element, i need the > values of the surrounding nodes connected to the adjacent elements. > > > >? ? 3) You want to store real space coordinates for a quadrature? > It should be basically the same as PetscFE of higher order. > I add some vertex needed to compute a polynomal solution of the desired > order. 
That means that if i have a N, order of the local approximation, > i need 0.5*(N+1)*(N+2) vertex to store in the DMPlex (in 2D), in > order to : > 1) have the correct number of dof > 2) use ghost nodes to sync the values of the vertex/edge/facet for > 1D/2D/3D problem > 2) save correctly the solution > > Does it make sense to you ? > > Maybe like > https://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ex11.c.html > With the use of the function SplitFaces, which i didn't fully > understood > so far. > > Thanks, > > Yann > > > > >? ? ? ? We usually define a quadrature on the reference element once. > > > >? ? Thanks, > > > >? ? ? Matt > > > >? ? ?I found elements of answers in those threads : > > > https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html > > > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html > > > >? ? ?However, it's not clear for me where to begin. > > > >? ? ?Quoting Matt, i should : > >? ? ?"? DMGetCoordinateDM(dm, &cdm); > >? ? ? ? ? > >? ? ? ? DMCreateLocalVector(cdm, &coordinatesLocal); > >? ? ? ? > >? ? ? ? DMSetCoordinatesLocal(dm, coordinatesLocal);" > > > >? ? ?However, i will not create ghost nodes this way. And i'm not > sure to > >? ? ?keep the good ordering. > >? ? ?This part should be implemented in the PetscFE interface, for > high > >? ? ?order > >? ? ?discrete solutions. > >? ? ?I did not succeed in finding the correct part of the source > doing it. > > > >? ? ?Could you please give me some hint to begin correctly thoses > tasks ? > > > >? ? ?Thanks, > > > >? ? ?Yann > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From knepley at gmail.com Thu Mar 26 17:38:33 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 26 Mar 2020 18:38:33 -0400 Subject: [petsc-users] node DG with DMPlex In-Reply-To: References: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> <0526eb34-b4ce-19c4-4f76-81d2cd41cd45@univ-amu.fr> Message-ID: On Thu, Mar 26, 2020 at 5:14 PM Yann Jobic wrote: > Hi matt > > Le 3/23/2020 ? 2:24 PM, Matthew Knepley a ?crit : > > On Wed, Mar 18, 2020 at 12:58 PM Yann Jobic > > wrote: > > > > Hi matt, > > > > Le 3/17/2020 ? 4:00 PM, Matthew Knepley a ?crit : > > > On Mon, Mar 16, 2020 at 5:20 PM Yann Jobic > > > > > >> > > wrote: > > > > > > Hi all, > > > > > > I would like to implement a nodal DG with the DMPlex > interface. > > > Therefore, i must add the internal nodes to the DM (GLL > > nodes), with > > > the > > > constrains : > > > 1) Add them as solution points, with correct coordinates (and > > keep the > > > good rotational ordering) > > > 2) Find the shared nodes at faces in order to compute the > fluxes > > > 3) For parallel use, so synchronize the ghost node at each > > time steps > > > > > > > > > Let me get the fundamentals straight before advising, since I > > have never > > > implemented nodal DG. > > > > > > 1) What is shared? > > I need to duplicate an edge in 2D, or a facet in 3D, and to sync it > > after a time step, in order to compute the numerical fluxes > > (Lax-Friedrichs at the beginning). 
> > > > > > I should have been more specific, but I think I see what you want. You > > do not "share" unknowns between cells, > > so all unknowns should be associated with some cell in the Section. > > > > You think of some cell unknowns as being "connected" to a face, so when > > you want to calculate a flux, you need > > the unknowns from the adjacent cell in order to do it. In order to do > > this, I would partition with overlap=1, which > > is what we do for finite volume, which has the same adjacency needs. You > > might also set > > > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetAdjacency.html > > to PETSC_TRUE, PETSC_FALSE, but you are probably doing everything > > matrix-free if you are using DG. > > The above is optimal for FV, but not for DG because you communicate more > > than you absolutely have to. > > > > A more complicated, but optimal, thing to do would be to assign interior > > dofs to the cell, and two sets of dofs > > to each face, one for each cell. Then you only communicate the face > > dofs. Its just more bookkeeping for you, > > but it will work in parallel just fine. > I'm going this way. > So i should use dm/impls/plex/examples/tutorials/ex1.c.html as reference. > I should define the internal nodes on cells : > I define the section with 3 fields (field 0 on cells, field 1 and 2 on > faces), as : > numComp[0] = Nr; /* Total number of dof per Cell */ > numDof[0*(dim+1)+0] = dim; /* defined over the Cell */ > And on the same section, the dofs at faces : > numComp[1] = NumDofPerFace; > numComp[2] = NumDofPerFace; > numDof[1*(dim+1)+dim-1] = dim-1; /* internal dof of the cell */ > numDof[2*(dim+1)+dim-1] = dim-1; /* external dof of the cell */ > > Is it a good way to create my section ? > I would put them all in one field. A "field" is supposed to be a physical thing, like velocity or pressure. > Thus, the data is duplicated for the faces, that means that i have to > sync the internal Face dof at faces with their corresponding values from > the internal one (at cells). > That is not how I was envisioning it. Perhaps a drawing. Suppose you had DG2, then you have 17 18 7-----8-----9-----14----15 | | | 16,3 4 5,6 11 12,13 | | | 1-----2-----3-----9-----10 19 20 so each face gets 2 dofs, one for each cell.When doing a cell integral, you only use the dof that is for that cell. The local-to-global would update the face dofs, so you would get each side. There is a reordering when you extract the closure. I have written one for spectral elements. We would need another here that ordered all the "other" face dofs to the end. This seems a little complicated to me. Do you know how Andreas Klockner does it in Hedge? Or Tim Warburton? I just want to make sure I am not missing an elegant way to handle this. > Here field 1 is synchronised with field 0, locally. > But the external Face dof, field 2, have to be synchronised with the > values of the adjacent cell. > Is it possible to use something like DMPlexGetFaceFields ? > Is there an example of such use of PetscSection and synchronisation > process ? > > For the parallel part, should i use PetscSF object ? > In parallel, integrals would be summed into the global vector, so each side has a 0 for the other face dof and the right contribution for its face dof. Then both sides get both solution dofs. It seems to work in my head. > I read your article "Mesh Algorithms for PDE with Sieve I: Mesh > Distribution". But it's refereeing to Matthew G. Knepley and Dmitry A. > Karpeev. Sieve implementation. 
> Technical Report ANL/MCS to appear, Argonne National Laboratory, > January 2008. > I couldn't find it. It is freely available ? > Don't bother reading that. There are later ones: There are two pretty good sources: https://arxiv.org/abs/1505.04633 https://arxiv.org/abs/1506.06194 The last one is a follow-on to this paper https://arxiv.org/abs/0908.4427 Thanks, Matt > > > > I don't think you need extra vertices, > or coordinates, and for output I > > recommend using DMPlexProject() to get > > the solution in some space that can be plotted like P1, or anything else > > supported by your visualization. > > I would like to use DMplex as much as i can, as i would in the future > refine locally the mesh. > > I hope you're good in this difficult situation (covid19), > > Best regards, > > Yann > > > > > Thanks, > > > > Matt > > > > > > > > We have an implementation of spectral element ordering > > > > > ( > https://gitlab.com/petsc/petsc/-/blob/master/src/dm/impls/plex/examples/tutorials/ex6.c > ). > > > > > Those share > > > the whole element boundary. > > > > > > 2) What ghosts do you need? > > In order to compute the numerical fluxes of one element, i need the > > values of the surrounding nodes connected to the adjacent elements. > > > > > > 3) You want to store real space coordinates for a quadrature? > > It should be basically the same as PetscFE of higher order. > > I add some vertex needed to compute a polynomal solution of the > desired > > order. That means that if i have a N, order of the local > approximation, > > i need 0.5*(N+1)*(N+2) vertex to store in the DMPlex (in 2D), in > > order to : > > 1) have the correct number of dof > > 2) use ghost nodes to sync the values of the vertex/edge/facet for > > 1D/2D/3D problem > > 2) save correctly the solution > > > > Does it make sense to you ? > > > > Maybe like > > > https://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ex11.c.html > > With the use of the function SplitFaces, which i didn't fully > > understood > > so far. > > > > Thanks, > > > > Yann > > > > > > > > We usually define a quadrature on the reference element > once. > > > > > > Thanks, > > > > > > Matt > > > > > > I found elements of answers in those threads : > > > > > > https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html > > > > > > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html > > > > > > However, it's not clear for me where to begin. > > > > > > Quoting Matt, i should : > > > " DMGetCoordinateDM(dm, &cdm); > > > > > > DMCreateLocalVector(cdm, &coordinatesLocal); > > > > > > DMSetCoordinatesLocal(dm, coordinatesLocal);" > > > > > > However, i will not create ghost nodes this way. And i'm not > > sure to > > > keep the good ordering. > > > This part should be implemented in the PetscFE interface, for > > high > > > order > > > discrete solutions. > > > I did not succeed in finding the correct part of the source > > doing it. > > > > > > Could you please give me some hint to begin correctly thoses > > tasks ? > > > > > > Thanks, > > > > > > Yann > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > > > experiments is infinitely more interesting than any results to > which > > > their experiments lead. 
> > > -- Norbert Wiener > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ < > http://www.cse.buffalo.edu/~knepley/> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.M.Aragon at tudelft.nl Fri Mar 27 02:31:19 2020 From: A.M.Aragon at tudelft.nl (Alejandro Aragon - 3ME) Date: Fri, 27 Mar 2020 07:31:19 +0000 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: Dear Matthew, Thanks for your email. I have attached the python code that reproduces the following error in my computer: (.pydev) ? dmplex_fem mpirun -np 2 python Cpp2Python.py Traceback (most recent call last): File "Cpp2Python.py", line 383, in sys.exit(Cpp2Python()) File "Cpp2Python.py", line 357, in Cpp2Python dm = createfields(dm) File "Cpp2Python.py", line 62, in createfields section.setFieldName(0, "u") File "PETSc/Section.pyx", line 59, in petsc4py.PETSc.Section.setFieldName petsc4py.PETSc.Error: error code 63 [1] PetscSectionSetFieldName() line 427 in /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c [1] Argument out of range [1] Section field 0 should be in [0, 0) Traceback (most recent call last): File "Cpp2Python.py", line 383, in sys.exit(Cpp2Python()) File "Cpp2Python.py", line 357, in Cpp2Python dm = createfields(dm) File "Cpp2Python.py", line 62, in createfields section.setFieldName(0, "u") File "PETSc/Section.pyx", line 59, in petsc4py.PETSc.Section.setFieldName petsc4py.PETSc.Error: error code 63 [0] PetscSectionSetFieldName() line 427 in /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c [0] Argument out of range [0] Section field 0 should be in [0, 0) ------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code.. Per user-direction, the job has been aborted. ------------------------------------------------------- -------------------------------------------------------------------------- mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[23972,1],0] Exit code: 1 -------------------------------------------------------------------------- I?m using Python 3.8 and this is the output of ?pip freeze' (.pydev) ? dmplex_fem pip freeze cachetools==4.0.0 cycler==0.10.0 kiwisolver==1.1.0 llvmlite==0.31.0 matplotlib==3.2.1 mpi4py==3.0.3 numba==0.48.0 numpy==1.18.2 petsc==3.12.4 petsc4py==3.12.0 plexus==0.1.0 pyparsing==2.4.6 python-dateutil==2.8.1 scipy==1.4.1 six==1.14.0 I?m looking forward to getting your insight on the issue. Best regards, ? Alejandro On 25 Mar 2020, at 17:37, Matthew Knepley > wrote: On Wed, Mar 25, 2020 at 12:29 PM Alejandro Aragon - 3ME > wrote: Dear everyone, I?m new to petsc4py and I?m trying to run a simple finite element code that uses DMPLEX to load a .msh file (created by Gmsh). In version 3.10 the code was working but I recently upgraded to 3.12 and I get the following error: (.pydev) ? testmodule git:(e0bc9ae) ? 
mpirun -np 2 python testmodule/__main__.py {3: } {3: } Traceback (most recent call last): File "testmodule/__main__.py", line 32, in sys.exit(main(sys.argv)) File "testmodule/__main__.py", line 29, in main step.solve(m) File "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line 33, in solve self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin petsc4py.PETSc.Error: error code 63 [1] MatAssemblyBegin() line 5182 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c [1] MatAssemblyBegin_MPIAIJ() line 810 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c [1] MatStashScatterBegin_Private() line 462 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [1] MatStashScatterBegin_BTS() line 931 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [1] PetscCommBuildTwoSidedFReq() line 555 in /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c [1] Argument out of range [1] toranks[0] 2 not in comm size 2 Traceback (most recent call last): File "testmodule/__main__.py", line 32, in sys.exit(main(sys.argv)) File "testmodule/__main__.py", line 29, in main step.solve(m) File "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line 33, in solve self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin petsc4py.PETSc.Error: error code 63 [0] MatAssemblyBegin() line 5182 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c [0] MatAssemblyBegin_MPIAIJ() line 810 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c [0] MatStashScatterBegin_Private() line 462 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [0] MatStashScatterBegin_BTS() line 931 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [0] PetscCommBuildTwoSidedFReq() line 555 in /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c [0] Argument out of range [0] toranks[0] 2 not in comm size 2 ------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code.. Per user-direction, the job has been aborted. ------------------------------------------------------- -------------------------------------------------------------------------- mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[46994,1],0] Exit code: 1 -------------------------------------------------------------------------- This is in the call to assembly, which looks like this: # Begins assembling the matrix. This routine should be called after completing all calls to MatSetValues(). self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 # Completes assembling the matrix. This routine should be called after MatAssemblyBegin(). self.Amat.assemblyEnd(assembly=0) I would appreciate if someone can give me some insight on what has changed in the new version of petsc4py (or petsc for that matter) to make this code work again. It looks like you have an inconsistent build, or a memory overwrite. Since you are in Python, I suspect the former. Can you build PETSc from scratch and try this? Does it work in serial? Can you send a small code that reproduces this? Thanks, Matt Best regards, ? 
Alejandro -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: d1_0.msh Type: application/octet-stream Size: 572913 bytes Desc: d1_0.msh URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Cpp2Python.py Type: text/x-python-script Size: 16596 bytes Desc: Cpp2Python.py URL: From knepley at gmail.com Fri Mar 27 09:09:11 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 27 Mar 2020 10:09:11 -0400 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: On Fri, Mar 27, 2020 at 3:31 AM Alejandro Aragon - 3ME < A.M.Aragon at tudelft.nl> wrote: > Dear Matthew, > > Thanks for your email. I have attached the python code that reproduces the > following error in my computer: > I think I see the problem. There were changes in DM in order to support fields which only occupy part of the domain. Now you need to tell the DM about the fields before it builds a Section. I think in your code, you only need f = PetscContainer() f.setName("potential") dm.addField(field = f) from https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMAddField.html before the createSection(). My Python may not be correct since I never use that interface. Thanks, Matt > (.pydev) ? dmplex_fem mpirun -np 2 python Cpp2Python.py > Traceback (most recent call last): > File "Cpp2Python.py", line 383, in > sys.exit(Cpp2Python()) > File "Cpp2Python.py", line 357, in Cpp2Python > dm = createfields(dm) > File "Cpp2Python.py", line 62, in createfields > section.setFieldName(0, "u") > File "PETSc/Section.pyx", line 59, in petsc4py.PETSc.Section.setFieldName > petsc4py.PETSc.Error: error code 63 > [1] PetscSectionSetFieldName() line 427 in > /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c > [1] Argument out of range > [1] Section field 0 should be in [0, 0) > Traceback (most recent call last): > File "Cpp2Python.py", line 383, in > sys.exit(Cpp2Python()) > File "Cpp2Python.py", line 357, in Cpp2Python > dm = createfields(dm) > File "Cpp2Python.py", line 62, in createfields > section.setFieldName(0, "u") > File "PETSc/Section.pyx", line 59, in petsc4py.PETSc.Section.setFieldName > petsc4py.PETSc.Error: error code 63 > [0] PetscSectionSetFieldName() line 427 in > /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c > [0] Argument out of range > [0] Section field 0 should be in [0, 0) > ------------------------------------------------------- > Primary job terminated normally, but 1 process returned > a non-zero exit code.. Per user-direction, the job has been aborted. > ------------------------------------------------------- > -------------------------------------------------------------------------- > mpirun detected that one or more processes exited with non-zero status, > thus causing > the job to be terminated. The first process to do so was: > > Process name: [[23972,1],0] > Exit code: 1 > -------------------------------------------------------------------------- > > I?m using Python 3.8 and this is the output of ?pip freeze' > > (.pydev) ? 
dmplex_fem pip freeze > cachetools==4.0.0 > cycler==0.10.0 > kiwisolver==1.1.0 > llvmlite==0.31.0 > matplotlib==3.2.1 > mpi4py==3.0.3 > numba==0.48.0 > numpy==1.18.2 > petsc==3.12.4 > petsc4py==3.12.0 > plexus==0.1.0 > pyparsing==2.4.6 > python-dateutil==2.8.1 > scipy==1.4.1 > six==1.14.0 > > I?m looking forward to getting your insight on the issue. > Best regards, > > ? Alejandro > > > > On 25 Mar 2020, at 17:37, Matthew Knepley wrote: > > On Wed, Mar 25, 2020 at 12:29 PM Alejandro Aragon - 3ME < > A.M.Aragon at tudelft.nl> wrote: > > Dear everyone, > > I?m new to petsc4py and I?m trying to run a simple finite element code > that uses DMPLEX to load a .msh file (created by Gmsh). In version 3.10 the > code was working but I recently upgraded to 3.12 and I get the following > error: > > (.pydev) ? testmodule git:(e0bc9ae) ? mpirun -np 2 python > testmodule/__main__.py > {3: } > {3: } > Traceback (most recent call last): > File "testmodule/__main__.py", line 32, in > sys.exit(main(sys.argv)) > File "testmodule/__main__.py", line 29, in main > step.solve(m) > File > "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line > 33, in solve > self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 > File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin > petsc4py.PETSc.Error: error code 63 > [1] MatAssemblyBegin() line 5182 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c > [1] MatAssemblyBegin_MPIAIJ() line 810 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c > [1] MatStashScatterBegin_Private() line 462 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [1] MatStashScatterBegin_BTS() line 931 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [1] PetscCommBuildTwoSidedFReq() line 555 in > /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c > [1] Argument out of range > [1] toranks[0] 2 not in comm size 2 > Traceback (most recent call last): > File "testmodule/__main__.py", line 32, in > sys.exit(main(sys.argv)) > File "testmodule/__main__.py", line 29, in main > step.solve(m) > File > "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line > 33, in solve > self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 > File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin > petsc4py.PETSc.Error: error code 63 > [0] MatAssemblyBegin() line 5182 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c > [0] MatAssemblyBegin_MPIAIJ() line 810 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c > [0] MatStashScatterBegin_Private() line 462 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [0] MatStashScatterBegin_BTS() line 931 in > /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c > [0] PetscCommBuildTwoSidedFReq() line 555 in > /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c > [0] Argument out of range > [0] toranks[0] 2 not in comm size 2 > ------------------------------------------------------- > Primary job terminated normally, but 1 process returned > a non-zero exit code.. Per user-direction, the job has been aborted. > ------------------------------------------------------- > -------------------------------------------------------------------------- > mpirun detected that one or more processes exited with non-zero status, > thus causing > the job to be terminated. 
The first process to do so was: > > Process name: [[46994,1],0] > Exit code: 1 > -------------------------------------------------------------------------- > > > This is in the call to assembly, which looks like this: > > # Begins assembling the matrix. This routine should be called after completing all calls to MatSetValues(). > self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 > # Completes assembling the matrix. This routine should be called after MatAssemblyBegin(). > self.Amat.assemblyEnd(assembly=0) > > I would appreciate if someone can give me some insight on what has changed > in the new version of petsc4py (or petsc for that matter) to make this code > work again. > > > It looks like you have an inconsistent build, or a memory overwrite. Since > you are in Python, I suspect the former. Can you build > PETSc from scratch and try this? Does it work in serial? Can you send a > small code that reproduces this? > > Thanks, > > Matt > > > Best regards, > > ? Alejandro > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Fri Mar 27 12:39:53 2020 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Fri, 27 Mar 2020 20:39:53 +0300 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: On Fri, 27 Mar 2020 at 17:10, Matthew Knepley wrote: > On Fri, Mar 27, 2020 at 3:31 AM Alejandro Aragon - 3ME < > A.M.Aragon at tudelft.nl> wrote: > >> Dear Matthew, >> >> Thanks for your email. I have attached the python code that reproduces >> the following error in my computer: >> > > I think I see the problem. There were changes in DM in order to support > fields which only occupy part of the domain. > Now you need to tell the DM about the fields before it builds a Section. I > think in your code, you only need > > f = PetscContainer() > f.setName("potential") > dm.addField(field = f) > > Except that petsc4py do not expose a Container class :-( @Alejandro, PETSc 3.13 is about to be released. Once that is done, we have a small windows to make a few enhancements in petsc4py before releasing the matching petsc4py-3.13 that you can use to upgrade your code. Would that work for you? -- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Fri Mar 27 13:27:50 2020 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Fri, 27 Mar 2020 19:27:50 +0100 Subject: [petsc-users] node DG with DMPlex In-Reply-To: References: <7885d022-cc56-8053-2b30-784ff47f0d0f@univ-amu.fr> <0526eb34-b4ce-19c4-4f76-81d2cd41cd45@univ-amu.fr> Message-ID: <6a1f6b17-8f7d-3180-88c9-35568511110b@univ-amu.fr> Hi matt, Many thanks for the help !! Le 3/26/2020 ? 11:38 PM, Matthew Knepley a ?crit?: > > ? ? ? ? 17? ? ? ? ? ? ? 18 > ? 7-----8-----9-----14----15 > ? ?|? ? ? ? ? ? ? ?|? ? ? ? ? ? ? ? | > 16,3? ? 4? ? 5,6? ? 11? ?12,13 > ? ?|? ? ? ? ? ? ? ?|? ? ? ? ? ? ? ? | > ? 
?1-----2-----3-----9-----10 > ? ? ? ? ? 19? ? ? ? ? ? 20 > > so each face gets 2 dofs, one for each cell.When doing a cell integral, > you only use the dof that is for that cell. I think i understood. I put the beginning of the code for dof management in attachment. i defined : numDof[0*(user.dim+1)+user.dim] = internalDof; /* internalDof defined on cells */ numDof[0*(user.dim+1)+user.dim-1] = nbDofFaceEle; /* nbDofFaceEle defined on faces */ This way, the dof faces are not duplicated. They are defined on the face. (this part is commented in the source file) Am i right ? > The local-to-global would update the face dofs, so you would get each side. > This way, a local-to-global will update the face dofs ? > There is a reordering when you extract the closure. I have written one > for spectral elements. We would need > another here that ordered all the "other" face dofs to the end. You mean using PetscSectionSetPermutation ? Or rewrite DMPlexSetClosurePermutationTensor in order to put all the face dofs at the end ? I don't understand this part. I'm looking at the documentation for DMPlexSetClosurePermutationTensor. https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/plex/plex.c.html#DMPlexSetClosurePermutationTensor It says that ---------------------- The closure in BFS ordering works through height strata (cells, edges, vertices) to produce the ordering .vb 0 1 2 3 8 9 14 15 11 10 13 12 4 5 7 6 .ve -------------------------- which is what we want no ? > > This seems a little complicated to me. Do you know how Andreas Klockner > does it in Hedge? Or Tim Warburton? I'm reading his book (nodal discontinuous galerkin methods) > I just want to make sure I am not missing an elegant way to handle this. > > Here field 1 is synchronised with field 0, locally. > But the external Face dof, field 2, have to be synchronised with the > values of the adjacent cell. > Is it possible to use something like? DMPlexGetFaceFields ? > Is there an example of such use of PetscSection and synchronisation > process ? > > For the parallel part, should i use PetscSF object ? > > > In parallel, integrals would be summed into the global vector, so each > side has a 0 for the other face dof and the right contribution > for its face dof. Then both sides get both solution dofs. It seems to > work in my head. > > I read your article "Mesh Algorithms for PDE with Sieve I: Mesh > Distribution". But it's refereeing to Matthew G. Knepley and Dmitry A. > Karpeev. Sieve implementation. > Technical Report ANL/MCS to appear, Argonne National Laboratory, > January 2008. > I couldn't find it. It is freely available ? > > > Don't bother reading that. There are later ones: > > ?There are two pretty good sources: > > https://arxiv.org/abs/1505.04633 > https://arxiv.org/abs/1506.06194 > > The last one is a follow-on to this paper > > https://arxiv.org/abs/0908.4427 > > ? Thanks, > > ? ? ?Matt > > > > > I don't think you need extra vertices, > or coordinates, and for > output I > > recommend using DMPlexProject() to get > > the solution in some space that can be plotted like P1, or > anything else > > supported by your visualization. > > I would like to use DMplex as much as i can, as i would in the future > refine locally the mesh. > > I hope you're good in this difficult situation (covid19), > > Best regards, > > Yann > > > > >? ? Thanks, > > > >? ? ? ?Matt > > > >? ? ? > > >? ? ? >? ? ? ? We have an implementation of spectral element ordering > >? ? ? 
> > > > ?(https://gitlab.com/petsc/petsc/-/blob/master/src/dm/impls/plex/examples/tutorials/ex6.c). > > > >? ? ? > Those share > >? ? ? >? ? ? ? the whole element boundary. > >? ? ? > > >? ? ? >? ? 2) What ghosts do you need? > >? ? ?In order to compute the numerical fluxes of one element, i > need the > >? ? ?values of the surrounding nodes connected to the adjacent > elements. > >? ? ? > > >? ? ? >? ? 3) You want to store real space coordinates for a > quadrature? > >? ? ?It should be basically the same as PetscFE of higher order. > >? ? ?I add some vertex needed to compute a polynomal solution of > the desired > >? ? ?order. That means that if i have a N, order of the local > approximation, > >? ? ?i need 0.5*(N+1)*(N+2) vertex to store in the DMPlex (in 2D), in > >? ? ?order to : > >? ? ?1) have the correct number of dof > >? ? ?2) use ghost nodes to sync the values of the > vertex/edge/facet for > >? ? ?1D/2D/3D problem > >? ? ?2) save correctly the solution > > > >? ? ?Does it make sense to you ? > > > >? ? ?Maybe like > > > https://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ex11.c.html > >? ? ?With the use of the function SplitFaces, which i didn't fully > >? ? ?understood > >? ? ?so far. > > > >? ? ?Thanks, > > > >? ? ?Yann > > > >? ? ? > > >? ? ? >? ? ? ? We usually define a quadrature on the reference > element once. > >? ? ? > > >? ? ? >? ? Thanks, > >? ? ? > > >? ? ? >? ? ? Matt > >? ? ? > > >? ? ? >? ? ?I found elements of answers in those threads : > >? ? ? > > > > https://lists.mcs.anl.gov/pipermail/petsc-users/2016-August/029985.html > >? ? ? > > > > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2019-October/039581.html > >? ? ? > > >? ? ? >? ? ?However, it's not clear for me where to begin. > >? ? ? > > >? ? ? >? ? ?Quoting Matt, i should : > >? ? ? >? ? ?"? DMGetCoordinateDM(dm, &cdm); > >? ? ? >? ? ? ? ? > >? ? ? >? ? ? ? DMCreateLocalVector(cdm, &coordinatesLocal); > >? ? ? >? ? ? ? > >? ? ? >? ? ? ? DMSetCoordinatesLocal(dm, coordinatesLocal);" > >? ? ? > > >? ? ? >? ? ?However, i will not create ghost nodes this way. And > i'm not > >? ? ?sure to > >? ? ? >? ? ?keep the good ordering. > >? ? ? >? ? ?This part should be implemented in the PetscFE > interface, for > >? ? ?high > >? ? ? >? ? ?order > >? ? ? >? ? ?discrete solutions. > >? ? ? >? ? ?I did not succeed in finding the correct part of the > source > >? ? ?doing it. > >? ? ? > > >? ? ? >? ? ?Could you please give me some hint to begin correctly > thoses > >? ? ?tasks ? > >? ? ? > > >? ? ? >? ? ?Thanks, > >? ? ? > > >? ? ? >? ? ?Yann > >? ? ? > > >? ? ? > > >? ? ? > > >? ? ? > -- > >? ? ? > What most experimenters take for granted before they begin > their > >? ? ? > experiments is infinitely more interesting than any > results to which > >? ? ? > their experiments lead. > >? ? ? > -- Norbert Wiener > >? ? ? > > >? ? ? > https://www.cse.buffalo.edu/~knepley/ > >? ? ? > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. 
> -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- static char help[] = "dof management with dmplex for DG problems, for simplices \n\n"; #include typedef struct { PetscInt dim; /* Topological problem dimension */ PetscInt Nf; /* Number of fields */ PetscInt Nc[1]; /* Number of components per field */ PetscInt N; /* Order of polymomials used for approximation */ } AppCtx; static PetscErrorCode ProcessOptions(MPI_Comm comm, AppCtx *options) { PetscBool flg; PetscErrorCode ierr; PetscFunctionBeginUser; options->dim = 2; options->Nf = 1; options->Nc[0] = 1; options->N = 4; ierr = PetscOptionsBegin(comm, "", "dof management with dmplex for DG problems (simplices) Options", "DMPLEX");CHKERRQ(ierr); ierr = PetscOptionsRangeInt("-dim", "Problem dimension", "dof_dg.c", options->dim, &options->dim, NULL,1,3);CHKERRQ(ierr); ierr = PetscOptionsInt("-order", "Order of polymomials used for approximation", "dof_dg.c", options->N, &options->N, &flg);CHKERRQ(ierr); ierr = PetscOptionsEnd(); PetscFunctionReturn(0); } int main(int argc, char **argv) { DM dm; PetscSection s; Vec u; PetscViewer viewer; AppCtx user; PetscInt cells[3] = {2, 2, 2}; PetscReal lower[3] = {-1,-1,-1}; PetscReal upper[3] = {1,1,1}; /* DG specification of the PetscSection */ /* order of poly approx is stored in AppCtx user : user.N */ PetscInt dofPerEle=0; /* Number of dof per cell */ PetscInt internalDof; /* dof defined on the internal cell */ /* so without the faces dof */ PetscInt nbFace=0; /* Number of faces per cell */ PetscInt nbDofFaceEle=0; /* Number of dof per face */ PetscErrorCode ierr; ierr = PetscInitialize(&argc, &argv, NULL, help); if (ierr) return ierr; ierr = ProcessOptions(PETSC_COMM_WORLD, &user);CHKERRQ(ierr); ierr = DMPlexCreateBoxMesh(PETSC_COMM_WORLD, user.dim, PETSC_FALSE, cells, lower, upper, NULL, PETSC_TRUE, &dm);CHKERRQ(ierr); ierr = DMSetFromOptions(dm);CHKERRQ(ierr); ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr); /* Total number of dof per element, number of faces per element and number of dof per face */ if (user.dim == 2) { dofPerEle = (user.N+1)*(user.N+2)/2; nbFace = 3; nbDofFaceEle = user.N + 1; internalDof = dofPerEle - nbFace*nbDofFaceEle + 3; } else if (user.dim == 3) { dofPerEle = (user.N+1)*(user.N+2)*(user.N+3)/6; nbFace = 4; nbDofFaceEle = (user.N+1)*(user.N+2)/2; internalDof = dofPerEle - nbFace*nbDofFaceEle + (user.N + 1)*6 - 4; } PetscPrintf(MPI_COMM_WORLD,"\ndof per cell : %d\n",dofPerEle); PetscPrintf(MPI_COMM_WORLD,"dof per face : %d\n",nbDofFaceEle); PetscPrintf(MPI_COMM_WORLD,"internal dof : %d\n",internalDof); { PetscInt *numDof, d; /* numDof[f*(dim+1)+d] gives the number of dof for field f on */ /* points of dimension d. For instance, numDof[1] is the number */ /* of dof for field 0 on each edge. 
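In this DG layout only the cell stratum (dimension dim) and the face stratum (dimension dim-1) receive dofs in the assignments below; all lower-dimensional points are left with zero dofs.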
*/ ierr = PetscMalloc1(user.Nf*(user.dim+1), &numDof);CHKERRQ(ierr); /* fields 0, written generally to add easily more fields */ for (d = 0; d < user.Nf*(user.dim+1); ++d) numDof[d] = 0; /* we put only the interior dof for not duplicating the face nodes */ numDof[0*(user.dim+1)+user.dim] = internalDof; /* internalDof defined on cells */ numDof[0*(user.dim+1)+user.dim-1] = nbDofFaceEle; /* nbDofFaceEle defined on faces */ ierr = DMSetNumFields(dm, user.Nf);CHKERRQ(ierr); ierr = DMPlexCreateSection(dm, NULL, user.Nc, numDof, 0, NULL, NULL, NULL, NULL, &s);CHKERRQ(ierr); ierr = PetscFree(numDof);CHKERRQ(ierr); } /* Name the Field variable */ ierr = PetscSectionSetFieldName(s, 0, "u");CHKERRQ(ierr); /* Tell the DM to use this data layout */ ierr = DMSetLocalSection(dm, s);CHKERRQ(ierr); /* Create a Vec with this layout and view it */ ierr = DMGetGlobalVector(dm, &u);CHKERRQ(ierr); ierr = PetscViewerCreate(PETSC_COMM_WORLD, &viewer);CHKERRQ(ierr); ierr = PetscViewerSetType(viewer, PETSCVIEWERVTK);CHKERRQ(ierr); ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK);CHKERRQ(ierr); ierr = PetscViewerFileSetName(viewer, "sol.vtk");CHKERRQ(ierr); ierr = VecView(u, viewer);CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); ierr = DMRestoreGlobalVector(dm, &u);CHKERRQ(ierr); /* Cleanup */ ierr = PetscSectionDestroy(&s);CHKERRQ(ierr); ierr = DMDestroy(&dm);CHKERRQ(ierr); ierr = PetscFinalize(); return ierr; } From yyang85 at stanford.edu Sat Mar 28 04:17:18 2020 From: yyang85 at stanford.edu (Yuyun Yang) Date: Sat, 28 Mar 2020 09:17:18 +0000 Subject: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix Message-ID: Hello team, If I use KSPSolve for Ax=b using Matshell (a user-defined stencil equivalent of A) without preconditioning (since the standard PC would need information of the matrix so cannot apply it to matrix-free method), is it possible, or even inevitable to encounter significant slowdown compared to using a fully assembled matrix with a suitable preconditioner? I'm running a small example and it's already taking a long time for Matshell to converge. Thanks for your help, Yuyun -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sat Mar 28 08:54:29 2020 From: jed at jedbrown.org (Jed Brown) Date: Sat, 28 Mar 2020 07:54:29 -0600 Subject: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix In-Reply-To: References: Message-ID: <877dz4y48a.fsf@jedbrown.org> If the number of iterations is (significantly) different, then you'd have to debug why your code doesn't implement the same linear operator. You can use MatComputeOperator or -ksp_view_mat_explicit to help reveal differences. If the number of iterations is the same but your code is slower, then you'd have to optimize the performance of your code. Yuyun Yang writes: > Hello team, > > If I use KSPSolve for Ax=b using Matshell (a user-defined stencil equivalent of A) without preconditioning (since the standard PC would need information of the matrix so cannot apply it to matrix-free method), is it possible, or even inevitable to encounter significant slowdown compared to using a fully assembled matrix with a suitable preconditioner? I'm running a small example and it's already taking a long time for Matshell to converge. 
> > Thanks for your help, > Yuyun From nicolas.barral at math.u-bordeaux.fr Sat Mar 28 09:04:22 2020 From: nicolas.barral at math.u-bordeaux.fr (Nicolas Barral) Date: Sat, 28 Mar 2020 15:04:22 +0100 Subject: [petsc-users] petsc4py/Plex createSection Message-ID: <92f8b5e8-36a1-041f-a6e9-8bb1bdfc009b@math.u-bordeaux.fr> All, I hope you're all safe and fine. My question may be due to me being rusty with dmplex, please forgive me if I missed something obvious... In my Firedrake code, I had the following line for a 2D mesh > section = createSection([1], [2, 0, 0], perm=mesh._plex_renumbering) where mesh._plex_renumbering was an IS containing the mesh renumbering. This created a nice section mapping the plex to a P1 field. However, with a recently installed Petsc/petsc4py, this does not work anymore and results in a section pointing all points of the plex to 0. I investigated a bit (although I don't understand the doc for this function), and found a similar construction in petsc4py test/test_dmplex.py in 1D: > DIM = 1 > CELLS = [[0, 1], [1, 2]] > COORDS = [[0.], [0.5], [1.]] > COMP = 1 > DOFS = [1, 0] > > plex = PETSc.DMPlex().createFromCellList(DIM, CELLS, COORDS) > section = plex.createSection([COMP], [DOFS]) > plex.view() > section.view() This also results in an empty section (which makes the test fail when I run it manually): > PetscSection Object: 1 MPI processes > type not yet set > Process 0: > ( 0) dim 0 offset 0 > ( 1) dim 0 offset 0 > ( 2) dim 0 offset 0 > ( 3) dim 0 offset 0 > ( 4) dim 0 offset 0 So, what did I miss ? Thanks, -- Nicolas From mfadams at lbl.gov Sat Mar 28 09:29:14 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 28 Mar 2020 10:29:14 -0400 Subject: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix In-Reply-To: <877dz4y48a.fsf@jedbrown.org> References: <877dz4y48a.fsf@jedbrown.org> Message-ID: I think Yuyun is saying "....inevitable to encounter significant slowdown compared to using a fully assembled matrix *with a suitable preconditioner* ? " You convergence rate will basically always be better with a preconditioner and unless you are solving the identity (mass matrix) then it will often be significant. Jed is responding to comparing un-preconditioned matrix vs matrix-free and you do want to check that they are identical if you think you have an exact Jacobian of a linear problem. On Sat, Mar 28, 2020 at 9:54 AM Jed Brown wrote: > If the number of iterations is (significantly) different, then you'd > have to debug why your code doesn't implement the same linear operator. > You can use MatComputeOperator or -ksp_view_mat_explicit to help reveal > differences. > > If the number of iterations is the same but your code is slower, then > you'd have to optimize the performance of your code. > > Yuyun Yang writes: > > > Hello team, > > > > If I use KSPSolve for Ax=b using Matshell (a user-defined stencil > equivalent of A) without preconditioning (since the standard PC would need > information of the matrix so cannot apply it to matrix-free method), is it > possible, or even inevitable to encounter significant slowdown compared to > using a fully assembled matrix with a suitable preconditioner? I'm running > a small example and it's already taking a long time for Matshell to > converge. > > > > Thanks for your help, > > Yuyun > -------------- next part -------------- An HTML attachment was scrubbed... 
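Putting the two replies above together, a minimal sketch of such a check in C (assuming Ashell is the user-defined MATSHELL, Aassembled is the assembled matrix it is supposed to match, ksp is the solver being configured, and error handling follows the usual ierr/CHKERRQ style; these names are only illustrative, not from the thread):

  Mat       Bexplicit;
  PetscReal diffnorm;

  /* Materialize the shell operator and compare it to the assembled matrix */
  ierr = MatComputeOperator(Ashell, MATAIJ, &Bexplicit);CHKERRQ(ierr);
  ierr = MatAXPY(Bexplicit, -1.0, Aassembled, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = MatNorm(Bexplicit, NORM_FROBENIUS, &diffnorm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "||A_shell - A_assembled||_F = %g\n", (double)diffnorm);CHKERRQ(ierr);
  ierr = MatDestroy(&Bexplicit);CHKERRQ(ierr);

  /* Keep the matrix-free action for the Krylov iterations, but build the
     preconditioner from an assembled (possibly approximate) matrix */
  ierr = KSPSetOperators(ksp, Ashell, Aassembled);CHKERRQ(ierr);

The second Mat argument of KSPSetOperators() is used only to construct the preconditioner, so an approximate assembled operator is enough to recover standard PCs while the MATSHELL still provides the action of A.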
URL: From knepley at gmail.com Sat Mar 28 11:15:38 2020 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Mar 2020 12:15:38 -0400 Subject: [petsc-users] petsc4py/Plex createSection In-Reply-To: <92f8b5e8-36a1-041f-a6e9-8bb1bdfc009b@math.u-bordeaux.fr> References: <92f8b5e8-36a1-041f-a6e9-8bb1bdfc009b@math.u-bordeaux.fr> Message-ID: On Sat, Mar 28, 2020 at 10:04 AM Nicolas Barral < nicolas.barral at math.u-bordeaux.fr> wrote: > All, > > I hope you're all safe and fine. > > My question may be due to me being rusty with dmplex, please forgive me > Hi Nicolas! I hope everything is good Bordeaux despite the lockdown. I think I know this one. I put in support for having fields that extend over only part of the mesh, which some users really needed. This means that the DM has a description of fields independent from the DS and Section. Thus we have to ask the DM how many fields it has when creating the Section automatically now. I think you can fix this by first telling the DM that it has one field: field = PETSc.Object() field.setName('potential') dm.addField(field) Thanks, Matt > if I missed something obvious... > In my Firedrake code, I had the following line for a 2D mesh > > section = createSection([1], [2, 0, 0], perm=mesh._plex_renumbering) > where mesh._plex_renumbering was an IS containing the mesh renumbering. > This created a nice section mapping the plex to a P1 field. > > However, with a recently installed Petsc/petsc4py, this does not work > anymore and results in a section pointing all points of the plex to 0. > I investigated a bit (although I don't understand the doc for this > function), and found a similar construction in petsc4py > test/test_dmplex.py in 1D: > > DIM = 1 > > CELLS = [[0, 1], [1, 2]] > > COORDS = [[0.], [0.5], [1.]] > > COMP = 1 > > DOFS = [1, 0] > > > > plex = PETSc.DMPlex().createFromCellList(DIM, CELLS, COORDS) > > section = plex.createSection([COMP], [DOFS]) > > plex.view() > > section.view() > > This also results in an empty section (which makes the test fail when I > run it manually): > > PetscSection Object: 1 MPI processes > > type not yet set > > Process 0: > > ( 0) dim 0 offset 0 > > ( 1) dim 0 offset 0 > > ( 2) dim 0 offset 0 > > ( 3) dim 0 offset 0 > > ( 4) dim 0 offset 0 > > So, what did I miss ? > > Thanks, > > -- > Nicolas > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.barral at math.u-bordeaux.fr Sat Mar 28 12:06:27 2020 From: nicolas.barral at math.u-bordeaux.fr (Nicolas Barral) Date: Sat, 28 Mar 2020 18:06:27 +0100 Subject: [petsc-users] petsc4py/Plex createSection In-Reply-To: References: <92f8b5e8-36a1-041f-a6e9-8bb1bdfc009b@math.u-bordeaux.fr> Message-ID: Thanks Matt! with your explanation I realized it was actually one line above in the test. It seems that you can just specify the number of fields with: self.plex.setNumFields(1) Thanks, -- Nicolas On 28/03/2020 17:15, Matthew Knepley wrote: > On Sat, Mar 28, 2020 at 10:04 AM Nicolas Barral > > wrote: > > All, > > I hope you're all safe and fine. > > My question may be due to me being rusty with dmplex, please forgive me > > > Hi Nicolas! I hope everything is good Bordeaux despite the lockdown. > > I think I know this one. 
I put in support for having fields that extend > over only part of the mesh, which some > users really needed. This means that the DM has a description of fields > independent from the DS and Section. > Thus we have to ask the DM how many fields it has when creating the > Section automatically now. I think you > can fix this by first telling the DM that it has one field: > > ? field = PETSc.Object() > ? field.setName('potential') > ? dm.addField(field) > ? > > ? Thanks, > > ? ? ?Matt > > if I missed something obvious... > In my Firedrake code, I had the following line for a 2D mesh > > section = createSection([1], [2, 0, 0], perm=mesh._plex_renumbering) > where mesh._plex_renumbering was an IS containing the mesh renumbering. > This created a nice section mapping the plex to a P1 field. > > However, with a recently installed Petsc/petsc4py, this does not work > anymore and results in a section pointing all points of the plex to 0. > I investigated a bit (although I don't understand the doc for this > function), and found a similar construction in petsc4py > test/test_dmplex.py in 1D: > > DIM = 1 > > CELLS = [[0, 1], [1, 2]] > > COORDS = [[0.], [0.5], [1.]] > > COMP = 1 > > DOFS = [1, 0] > > > > plex = PETSc.DMPlex().createFromCellList(DIM, CELLS, COORDS) > > section = plex.createSection([COMP], [DOFS]) > > plex.view() > > section.view() > > This also results in an empty section (which makes the test fail when I > run it manually): > > PetscSection Object: 1 MPI processes > >? ?type not yet set > > Process 0: > >? ?(? ?0) dim? 0 offset? ?0 > >? ?(? ?1) dim? 0 offset? ?0 > >? ?(? ?2) dim? 0 offset? ?0 > >? ?(? ?3) dim? 0 offset? ?0 > >? ?(? ?4) dim? 0 offset? ?0 > > So, what did I miss ? > > Thanks, > > -- > Nicolas > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From knepley at gmail.com Sat Mar 28 12:13:41 2020 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Mar 2020 13:13:41 -0400 Subject: [petsc-users] petsc4py/Plex createSection In-Reply-To: References: <92f8b5e8-36a1-041f-a6e9-8bb1bdfc009b@math.u-bordeaux.fr> Message-ID: On Sat, Mar 28, 2020 at 1:06 PM Nicolas Barral < nicolas.barral at math.u-bordeaux.fr> wrote: > Thanks Matt! > > with your explanation I realized it was actually one line above in the > test. It seems that you can just specify the number of fields with: > self.plex.setNumFields(1) > Great find. I need to figure out where to put that in the documentation. Thanks, MAtt > Thanks, > > -- > Nicolas > > On 28/03/2020 17:15, Matthew Knepley wrote: > > On Sat, Mar 28, 2020 at 10:04 AM Nicolas Barral > > > > wrote: > > > > All, > > > > I hope you're all safe and fine. > > > > My question may be due to me being rusty with dmplex, please forgive > me > > > > > > Hi Nicolas! I hope everything is good Bordeaux despite the lockdown. > > > > I think I know this one. I put in support for having fields that extend > > over only part of the mesh, which some > > users really needed. This means that the DM has a description of fields > > independent from the DS and Section. > > Thus we have to ask the DM how many fields it has when creating the > > Section automatically now. 
I think you > > can fix this by first telling the DM that it has one field: > > > > field = PETSc.Object() > > field.setName('potential') > > dm.addField(field) > > > > > > Thanks, > > > > Matt > > > > if I missed something obvious... > > In my Firedrake code, I had the following line for a 2D mesh > > > section = createSection([1], [2, 0, 0], > perm=mesh._plex_renumbering) > > where mesh._plex_renumbering was an IS containing the mesh > renumbering. > > This created a nice section mapping the plex to a P1 field. > > > > However, with a recently installed Petsc/petsc4py, this does not work > > anymore and results in a section pointing all points of the plex to > 0. > > I investigated a bit (although I don't understand the doc for this > > function), and found a similar construction in petsc4py > > test/test_dmplex.py in 1D: > > > DIM = 1 > > > CELLS = [[0, 1], [1, 2]] > > > COORDS = [[0.], [0.5], [1.]] > > > COMP = 1 > > > DOFS = [1, 0] > > > > > > plex = PETSc.DMPlex().createFromCellList(DIM, CELLS, COORDS) > > > section = plex.createSection([COMP], [DOFS]) > > > plex.view() > > > section.view() > > > > This also results in an empty section (which makes the test fail > when I > > run it manually): > > > PetscSection Object: 1 MPI processes > > > type not yet set > > > Process 0: > > > ( 0) dim 0 offset 0 > > > ( 1) dim 0 offset 0 > > > ( 2) dim 0 offset 0 > > > ( 3) dim 0 offset 0 > > > ( 4) dim 0 offset 0 > > > > So, what did I miss ? > > > > Thanks, > > > > -- > > Nicolas > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ < > http://www.cse.buffalo.edu/~knepley/> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Mar 28 12:14:41 2020 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Mar 2020 13:14:41 -0400 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: On Fri, Mar 27, 2020 at 10:09 AM Matthew Knepley wrote: > On Fri, Mar 27, 2020 at 3:31 AM Alejandro Aragon - 3ME < > A.M.Aragon at tudelft.nl> wrote: > >> Dear Matthew, >> >> Thanks for your email. I have attached the python code that reproduces >> the following error in my computer: >> > > I think I see the problem. There were changes in DM in order to support > fields which only occupy part of the domain. > Now you need to tell the DM about the fields before it builds a Section. I > think in your code, you only need > > f = PetscContainer() > f.setName("potential") > dm.addField(field = f) > So Nicolas Barral found a much better way to do this. You only need dm.setNumFields(1) Thanks, Matt > from > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMAddField.html > before the createSection(). > My Python may not be correct since I never use that interface. > > Thanks, > > Matt > > >> (.pydev) ? 
dmplex_fem mpirun -np 2 python Cpp2Python.py >> Traceback (most recent call last): >> File "Cpp2Python.py", line 383, in >> sys.exit(Cpp2Python()) >> File "Cpp2Python.py", line 357, in Cpp2Python >> dm = createfields(dm) >> File "Cpp2Python.py", line 62, in createfields >> section.setFieldName(0, "u") >> File "PETSc/Section.pyx", line 59, in >> petsc4py.PETSc.Section.setFieldName >> petsc4py.PETSc.Error: error code 63 >> [1] PetscSectionSetFieldName() line 427 in >> /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c >> [1] Argument out of range >> [1] Section field 0 should be in [0, 0) >> Traceback (most recent call last): >> File "Cpp2Python.py", line 383, in >> sys.exit(Cpp2Python()) >> File "Cpp2Python.py", line 357, in Cpp2Python >> dm = createfields(dm) >> File "Cpp2Python.py", line 62, in createfields >> section.setFieldName(0, "u") >> File "PETSc/Section.pyx", line 59, in >> petsc4py.PETSc.Section.setFieldName >> petsc4py.PETSc.Error: error code 63 >> [0] PetscSectionSetFieldName() line 427 in >> /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c >> [0] Argument out of range >> [0] Section field 0 should be in [0, 0) >> ------------------------------------------------------- >> Primary job terminated normally, but 1 process returned >> a non-zero exit code.. Per user-direction, the job has been aborted. >> ------------------------------------------------------- >> -------------------------------------------------------------------------- >> mpirun detected that one or more processes exited with non-zero status, >> thus causing >> the job to be terminated. The first process to do so was: >> >> Process name: [[23972,1],0] >> Exit code: 1 >> -------------------------------------------------------------------------- >> >> I?m using Python 3.8 and this is the output of ?pip freeze' >> >> (.pydev) ? dmplex_fem pip freeze >> cachetools==4.0.0 >> cycler==0.10.0 >> kiwisolver==1.1.0 >> llvmlite==0.31.0 >> matplotlib==3.2.1 >> mpi4py==3.0.3 >> numba==0.48.0 >> numpy==1.18.2 >> petsc==3.12.4 >> petsc4py==3.12.0 >> plexus==0.1.0 >> pyparsing==2.4.6 >> python-dateutil==2.8.1 >> scipy==1.4.1 >> six==1.14.0 >> >> I?m looking forward to getting your insight on the issue. >> Best regards, >> >> ? Alejandro >> >> >> >> On 25 Mar 2020, at 17:37, Matthew Knepley wrote: >> >> On Wed, Mar 25, 2020 at 12:29 PM Alejandro Aragon - 3ME < >> A.M.Aragon at tudelft.nl> wrote: >> >> Dear everyone, >> >> I?m new to petsc4py and I?m trying to run a simple finite element code >> that uses DMPLEX to load a .msh file (created by Gmsh). In version 3.10 the >> code was working but I recently upgraded to 3.12 and I get the following >> error: >> >> (.pydev) ? testmodule git:(e0bc9ae) ? 
mpirun -np 2 python >> testmodule/__main__.py >> {3: } >> {3: } >> Traceback (most recent call last): >> File "testmodule/__main__.py", line 32, in >> sys.exit(main(sys.argv)) >> File "testmodule/__main__.py", line 29, in main >> step.solve(m) >> File >> "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line >> 33, in solve >> self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 >> File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin >> petsc4py.PETSc.Error: error code 63 >> [1] MatAssemblyBegin() line 5182 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c >> [1] MatAssemblyBegin_MPIAIJ() line 810 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c >> [1] MatStashScatterBegin_Private() line 462 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >> [1] MatStashScatterBegin_BTS() line 931 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >> [1] PetscCommBuildTwoSidedFReq() line 555 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c >> [1] Argument out of range >> [1] toranks[0] 2 not in comm size 2 >> Traceback (most recent call last): >> File "testmodule/__main__.py", line 32, in >> sys.exit(main(sys.argv)) >> File "testmodule/__main__.py", line 29, in main >> step.solve(m) >> File >> "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line >> 33, in solve >> self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 >> File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin >> petsc4py.PETSc.Error: error code 63 >> [0] MatAssemblyBegin() line 5182 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c >> [0] MatAssemblyBegin_MPIAIJ() line 810 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c >> [0] MatStashScatterBegin_Private() line 462 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >> [0] MatStashScatterBegin_BTS() line 931 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >> [0] PetscCommBuildTwoSidedFReq() line 555 in >> /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c >> [0] Argument out of range >> [0] toranks[0] 2 not in comm size 2 >> ------------------------------------------------------- >> Primary job terminated normally, but 1 process returned >> a non-zero exit code.. Per user-direction, the job has been aborted. >> ------------------------------------------------------- >> -------------------------------------------------------------------------- >> mpirun detected that one or more processes exited with non-zero status, >> thus causing >> the job to be terminated. The first process to do so was: >> >> Process name: [[46994,1],0] >> Exit code: 1 >> -------------------------------------------------------------------------- >> >> >> This is in the call to assembly, which looks like this: >> >> # Begins assembling the matrix. This routine should be called after completing all calls to MatSetValues(). >> self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 >> # Completes assembling the matrix. This routine should be called after MatAssemblyBegin(). >> self.Amat.assemblyEnd(assembly=0) >> >> I would appreciate if someone can give me some insight on what has >> changed in the new version of petsc4py (or petsc for that matter) to make >> this code work again. >> >> >> It looks like you have an inconsistent build, or a memory overwrite. >> Since you are in Python, I suspect the former. 
Can you build >> PETSc from scratch and try this? Does it work in serial? Can you send a >> small code that reproduces this? >> >> Thanks, >> >> Matt >> >> >> Best regards, >> >> ? Alejandro >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.M.Aragon at tudelft.nl Sat Mar 28 12:22:32 2020 From: A.M.Aragon at tudelft.nl (Alejandro Aragon - 3ME) Date: Sat, 28 Mar 2020 17:22:32 +0000 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: Hi Lisandro, Thanks for your reply. Waiting for PETSc 3.13 and the matching petsc4py 3.13 would work just fine for me. It would be nice, however, to know what?s causing the issue. Best, ?Alejandro On 27 Mar 2020, at 18:39, Lisandro Dalcin > wrote: On Fri, 27 Mar 2020 at 17:10, Matthew Knepley > wrote: On Fri, Mar 27, 2020 at 3:31 AM Alejandro Aragon - 3ME > wrote: Dear Matthew, Thanks for your email. I have attached the python code that reproduces the following error in my computer: I think I see the problem. There were changes in DM in order to support fields which only occupy part of the domain. Now you need to tell the DM about the fields before it builds a Section. I think in your code, you only need f = PetscContainer() f.setName("potential") dm.addField(field = f) Except that petsc4py do not expose a Container class :-( @Alejandro, PETSc 3.13 is about to be released. Once that is done, we have a small windows to make a few enhancements in petsc4py before releasing the matching petsc4py-3.13 that you can use to upgrade your code. Would that work for you? -- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.M.Aragon at tudelft.nl Sat Mar 28 12:29:55 2020 From: A.M.Aragon at tudelft.nl (Alejandro Aragon - 3ME) Date: Sat, 28 Mar 2020 17:29:55 +0000 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: References: Message-ID: <5DD8A40E-BA5F-4F37-9B67-63FC67FC4E73@tudelft.nl> Dear Matthew, Thanks for your email. I tried first what you suggested and it didn?t work. However, I actually tried also commenting the same function in Section and worked! 
This is the working code: def createfields(dm): """Set up the solution field""" dim = dm.getDimension() # The number of solution fields numFields = 1 dm.setNumFields(numFields) # numComp - An array of size numFields that holds the number of components for each field numComp = np.array([dim], dtype=np.int32) # numDof - An array of size numFields*(dim+1) which holds # the number of dof for each field on a mesh piece of dimension d numDof = np.zeros(numFields*(dim+1), dtype=np.int32) numDof[0] = dim # u is defined on vertices # Create a PetscSection based upon the dof layout specification provided # PetscSection: Mapping from integers in a designated range to contiguous sets of integers section = dm.createSection(numComp, numDof) # Sets the name of a field in the PetscSection, 0 is the field number and "u" is the field name section.setFieldName(0, "u") # Set the PetscSection encoding the local data layout for the DM dm.setDefaultSection(section) return dm The question I now have is whether PETSc 3.13 and the matching petsc4py will change functionality to the point that this code will no longer work. Would that be the case? Best regards, ? Alejandro On 28 Mar 2020, at 18:14, Matthew Knepley > wrote: On Fri, Mar 27, 2020 at 10:09 AM Matthew Knepley > wrote: On Fri, Mar 27, 2020 at 3:31 AM Alejandro Aragon - 3ME > wrote: Dear Matthew, Thanks for your email. I have attached the python code that reproduces the following error in my computer: I think I see the problem. There were changes in DM in order to support fields which only occupy part of the domain. Now you need to tell the DM about the fields before it builds a Section. I think in your code, you only need f = PetscContainer() f.setName("potential") dm.addField(field = f) So Nicolas Barral found a much better way to do this. You only need dm.setNumFields(1) Thanks, Matt from https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMAddField.html before the createSection(). My Python may not be correct since I never use that interface. Thanks, Matt (.pydev) ? dmplex_fem mpirun -np 2 python Cpp2Python.py Traceback (most recent call last): File "Cpp2Python.py", line 383, in sys.exit(Cpp2Python()) File "Cpp2Python.py", line 357, in Cpp2Python dm = createfields(dm) File "Cpp2Python.py", line 62, in createfields section.setFieldName(0, "u") File "PETSc/Section.pyx", line 59, in petsc4py.PETSc.Section.setFieldName petsc4py.PETSc.Error: error code 63 [1] PetscSectionSetFieldName() line 427 in /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c [1] Argument out of range [1] Section field 0 should be in [0, 0) Traceback (most recent call last): File "Cpp2Python.py", line 383, in sys.exit(Cpp2Python()) File "Cpp2Python.py", line 357, in Cpp2Python dm = createfields(dm) File "Cpp2Python.py", line 62, in createfields section.setFieldName(0, "u") File "PETSc/Section.pyx", line 59, in petsc4py.PETSc.Section.setFieldName petsc4py.PETSc.Error: error code 63 [0] PetscSectionSetFieldName() line 427 in /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c [0] Argument out of range [0] Section field 0 should be in [0, 0) ------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code.. Per user-direction, the job has been aborted. 
------------------------------------------------------- -------------------------------------------------------------------------- mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[23972,1],0] Exit code: 1 -------------------------------------------------------------------------- I?m using Python 3.8 and this is the output of ?pip freeze' (.pydev) ? dmplex_fem pip freeze cachetools==4.0.0 cycler==0.10.0 kiwisolver==1.1.0 llvmlite==0.31.0 matplotlib==3.2.1 mpi4py==3.0.3 numba==0.48.0 numpy==1.18.2 petsc==3.12.4 petsc4py==3.12.0 plexus==0.1.0 pyparsing==2.4.6 python-dateutil==2.8.1 scipy==1.4.1 six==1.14.0 I?m looking forward to getting your insight on the issue. Best regards, ? Alejandro On 25 Mar 2020, at 17:37, Matthew Knepley > wrote: On Wed, Mar 25, 2020 at 12:29 PM Alejandro Aragon - 3ME > wrote: Dear everyone, I?m new to petsc4py and I?m trying to run a simple finite element code that uses DMPLEX to load a .msh file (created by Gmsh). In version 3.10 the code was working but I recently upgraded to 3.12 and I get the following error: (.pydev) ? testmodule git:(e0bc9ae) ? mpirun -np 2 python testmodule/__main__.py {3: } {3: } Traceback (most recent call last): File "testmodule/__main__.py", line 32, in sys.exit(main(sys.argv)) File "testmodule/__main__.py", line 29, in main step.solve(m) File "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line 33, in solve self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin petsc4py.PETSc.Error: error code 63 [1] MatAssemblyBegin() line 5182 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c [1] MatAssemblyBegin_MPIAIJ() line 810 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c [1] MatStashScatterBegin_Private() line 462 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [1] MatStashScatterBegin_BTS() line 931 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [1] PetscCommBuildTwoSidedFReq() line 555 in /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c [1] Argument out of range [1] toranks[0] 2 not in comm size 2 Traceback (most recent call last): File "testmodule/__main__.py", line 32, in sys.exit(main(sys.argv)) File "testmodule/__main__.py", line 29, in main step.solve(m) File "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line 33, in solve self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin petsc4py.PETSc.Error: error code 63 [0] MatAssemblyBegin() line 5182 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c [0] MatAssemblyBegin_MPIAIJ() line 810 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c [0] MatStashScatterBegin_Private() line 462 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [0] MatStashScatterBegin_BTS() line 931 in /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c [0] PetscCommBuildTwoSidedFReq() line 555 in /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c [0] Argument out of range [0] toranks[0] 2 not in comm size 2 ------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code.. Per user-direction, the job has been aborted. 
------------------------------------------------------- -------------------------------------------------------------------------- mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[46994,1],0] Exit code: 1 -------------------------------------------------------------------------- This is in the call to assembly, which looks like this: # Begins assembling the matrix. This routine should be called after completing all calls to MatSetValues(). self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 # Completes assembling the matrix. This routine should be called after MatAssemblyBegin(). self.Amat.assemblyEnd(assembly=0) I would appreciate if someone can give me some insight on what has changed in the new version of petsc4py (or petsc for that matter) to make this code work again. It looks like you have an inconsistent build, or a memory overwrite. Since you are in Python, I suspect the former. Can you build PETSc from scratch and try this? Does it work in serial? Can you send a small code that reproduces this? Thanks, Matt Best regards, ? Alejandro -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Mar 28 14:09:35 2020 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 28 Mar 2020 15:09:35 -0400 Subject: [petsc-users] [petsc4py] Assembly fails In-Reply-To: <5DD8A40E-BA5F-4F37-9B67-63FC67FC4E73@tudelft.nl> References: <5DD8A40E-BA5F-4F37-9B67-63FC67FC4E73@tudelft.nl> Message-ID: On Sat, Mar 28, 2020 at 1:30 PM Alejandro Aragon - 3ME < A.M.Aragon at tudelft.nl> wrote: > Dear Matthew, > > Thanks for your email. I tried first what you suggested and it didn?t > work. However, I actually tried also commenting the same function in > Section and worked! This is the working code: > I do not understand what you mean above. 
Thanks, Matt > def createfields(dm): > """Set up the solution field""" > > dim = dm.getDimension() > # The number of solution fields > numFields = 1 > dm.setNumFields(numFields) > # numComp - An array of size numFields that holds the number of components for each field > numComp = np.array([dim], dtype=np.int32) > # numDof - An array of size numFields*(dim+1) which holds > # the number of dof for each field on a mesh piece of dimension d > numDof = np.zeros(numFields*(dim+1), dtype=np.int32) > numDof[0] = dim # u is defined on vertices > > # Create a PetscSection based upon the dof layout specification provided > # PetscSection: Mapping from integers in a designated range to contiguous sets of integers > section = dm.createSection(numComp, numDof) > # Sets the name of a field in the PetscSection, 0 is the field number and "u" is the field name > section.setFieldName(0, "u") > # Set the PetscSection encoding the local data layout for the DM > dm.setDefaultSection(section) > > return dm > > > The question I now have is whether PETSc 3.13 and the matching petsc4py > will change functionality to the point that this code will no longer work. > Would that be the case? > > Best regards, > > ? Alejandro > > On 28 Mar 2020, at 18:14, Matthew Knepley wrote: > > On Fri, Mar 27, 2020 at 10:09 AM Matthew Knepley > wrote: > >> On Fri, Mar 27, 2020 at 3:31 AM Alejandro Aragon - 3ME < >> A.M.Aragon at tudelft.nl> wrote: >> >>> Dear Matthew, >>> >>> Thanks for your email. I have attached the python code that reproduces >>> the following error in my computer: >>> >> >> I think I see the problem. There were changes in DM in order to support >> fields which only occupy part of the domain. >> Now you need to tell the DM about the fields before it builds a Section. >> I think in your code, you only need >> >> f = PetscContainer() >> f.setName("potential") >> dm.addField(field = f) >> > > So Nicolas Barral found a much better way to do this. You only need > > dm.setNumFields(1) > > Thanks, > > Matt > > >> from >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMAddField.html >> >> before the createSection(). >> My Python may not be correct since I never use that interface. >> >> Thanks, >> >> Matt >> >> >>> (.pydev) ? 
dmplex_fem mpirun -np 2 python Cpp2Python.py >>> Traceback (most recent call last): >>> File "Cpp2Python.py", line 383, in >>> sys.exit(Cpp2Python()) >>> File "Cpp2Python.py", line 357, in Cpp2Python >>> dm = createfields(dm) >>> File "Cpp2Python.py", line 62, in createfields >>> section.setFieldName(0, "u") >>> File "PETSc/Section.pyx", line 59, in >>> petsc4py.PETSc.Section.setFieldName >>> petsc4py.PETSc.Error: error code 63 >>> [1] PetscSectionSetFieldName() line 427 in >>> /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c >>> [1] Argument out of range >>> [1] Section field 0 should be in [0, 0) >>> Traceback (most recent call last): >>> File "Cpp2Python.py", line 383, in >>> sys.exit(Cpp2Python()) >>> File "Cpp2Python.py", line 357, in Cpp2Python >>> dm = createfields(dm) >>> File "Cpp2Python.py", line 62, in createfields >>> section.setFieldName(0, "u") >>> File "PETSc/Section.pyx", line 59, in >>> petsc4py.PETSc.Section.setFieldName >>> petsc4py.PETSc.Error: error code 63 >>> [0] PetscSectionSetFieldName() line 427 in >>> /private/tmp/pip-install-laf1l3br/petsc/src/vec/is/section/interface/section.c >>> [0] Argument out of range >>> [0] Section field 0 should be in [0, 0) >>> ------------------------------------------------------- >>> Primary job terminated normally, but 1 process returned >>> a non-zero exit code.. Per user-direction, the job has been aborted. >>> ------------------------------------------------------- >>> >>> -------------------------------------------------------------------------- >>> mpirun detected that one or more processes exited with non-zero status, >>> thus causing >>> the job to be terminated. The first process to do so was: >>> >>> Process name: [[23972,1],0] >>> Exit code: 1 >>> >>> -------------------------------------------------------------------------- >>> >>> I?m using Python 3.8 and this is the output of ?pip freeze' >>> >>> (.pydev) ? dmplex_fem pip freeze >>> cachetools==4.0.0 >>> cycler==0.10.0 >>> kiwisolver==1.1.0 >>> llvmlite==0.31.0 >>> matplotlib==3.2.1 >>> mpi4py==3.0.3 >>> numba==0.48.0 >>> numpy==1.18.2 >>> petsc==3.12.4 >>> petsc4py==3.12.0 >>> plexus==0.1.0 >>> pyparsing==2.4.6 >>> python-dateutil==2.8.1 >>> scipy==1.4.1 >>> six==1.14.0 >>> >>> I?m looking forward to getting your insight on the issue. >>> Best regards, >>> >>> ? Alejandro >>> >>> >>> >>> On 25 Mar 2020, at 17:37, Matthew Knepley wrote: >>> >>> On Wed, Mar 25, 2020 at 12:29 PM Alejandro Aragon - 3ME < >>> A.M.Aragon at tudelft.nl> wrote: >>> >>> Dear everyone, >>> >>> I?m new to petsc4py and I?m trying to run a simple finite element code >>> that uses DMPLEX to load a .msh file (created by Gmsh). In version 3.10 the >>> code was working but I recently upgraded to 3.12 and I get the following >>> error: >>> >>> (.pydev) ? testmodule git:(e0bc9ae) ? 
mpirun -np 2 python >>> testmodule/__main__.py >>> {3: } >>> {3: } >>> Traceback (most recent call last): >>> File "testmodule/__main__.py", line 32, in >>> sys.exit(main(sys.argv)) >>> File "testmodule/__main__.py", line 29, in main >>> step.solve(m) >>> File >>> "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line >>> 33, in solve >>> self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 >>> File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin >>> petsc4py.PETSc.Error: error code 63 >>> [1] MatAssemblyBegin() line 5182 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c >>> [1] MatAssemblyBegin_MPIAIJ() line 810 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c >>> [1] MatStashScatterBegin_Private() line 462 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >>> [1] MatStashScatterBegin_BTS() line 931 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >>> [1] PetscCommBuildTwoSidedFReq() line 555 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c >>> [1] Argument out of range >>> [1] toranks[0] 2 not in comm size 2 >>> Traceback (most recent call last): >>> File "testmodule/__main__.py", line 32, in >>> sys.exit(main(sys.argv)) >>> File "testmodule/__main__.py", line 29, in main >>> step.solve(m) >>> File >>> "/Users/aaragon/Local/testmodule/testmodule/fem/analysis/static.py", line >>> 33, in solve >>> self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 >>> File "PETSc/Mat.pyx", line 1039, in petsc4py.PETSc.Mat.assemblyBegin >>> petsc4py.PETSc.Error: error code 63 >>> [0] MatAssemblyBegin() line 5182 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/interface/matrix.c >>> [0] MatAssemblyBegin_MPIAIJ() line 810 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/impls/aij/mpi/mpiaij.c >>> [0] MatStashScatterBegin_Private() line 462 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >>> [0] MatStashScatterBegin_BTS() line 931 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/mat/utils/matstash.c >>> [0] PetscCommBuildTwoSidedFReq() line 555 in >>> /private/tmp/pip-install-zurcx_6k/petsc/src/sys/utils/mpits.c >>> [0] Argument out of range >>> [0] toranks[0] 2 not in comm size 2 >>> ------------------------------------------------------- >>> Primary job terminated normally, but 1 process returned >>> a non-zero exit code.. Per user-direction, the job has been aborted. >>> ------------------------------------------------------- >>> >>> -------------------------------------------------------------------------- >>> mpirun detected that one or more processes exited with non-zero status, >>> thus causing >>> the job to be terminated. The first process to do so was: >>> >>> Process name: [[46994,1],0] >>> Exit code: 1 >>> >>> -------------------------------------------------------------------------- >>> >>> >>> This is in the call to assembly, which looks like this: >>> >>> # Begins assembling the matrix. This routine should be called after completing all calls to MatSetValues(). >>> self.Amat.assemblyBegin(assembly=0) # FINAL_ASSEMBLY = 0 >>> # Completes assembling the matrix. This routine should be called after MatAssemblyBegin(). >>> self.Amat.assemblyEnd(assembly=0) >>> >>> I would appreciate if someone can give me some insight on what has >>> changed in the new version of petsc4py (or petsc for that matter) to make >>> this code work again. 
>>> >>> >>> It looks like you have an inconsistent build, or a memory overwrite. >>> Since you are in Python, I suspect the former. Can you build >>> PETSc from scratch and try this? Does it work in serial? Can you send a >>> small code that reproduces this? >>> >>> Thanks, >>> >>> Matt >>> >>> >>> Best regards, >>> >>> ? Alejandro >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yyang85 at stanford.edu Sat Mar 28 20:14:33 2020 From: yyang85 at stanford.edu (Yuyun Yang) Date: Sun, 29 Mar 2020 01:14:33 +0000 Subject: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix In-Reply-To: References: <877dz4y48a.fsf@jedbrown.org>, Message-ID: Thanks for the explanations. Yes they are identical if no preconditioner is applied. A follow-up question on this: in the case of a changing matrix at every time step, it's necessary for my code to destroy and re-assemble the matrix every time, so I was wondering whether the matrix-free method, even though with no preconditioner the KSP convergence is slower, would have any advantage by saving the matrix assembly step? Or the time saved by that is insignificant compared to the slowdown in KSP with a preconditioned matrix? Thank you, Yuyun ?? Outlook for Android ________________________________ From: Mark Adams Sent: Saturday, March 28, 2020 10:29:14 PM To: Jed Brown Cc: Yuyun Yang ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix I think Yuyun is saying "....inevitable to encounter significant slowdown compared to using a fully assembled matrix with a suitable preconditioner? " You convergence rate will basically always be better with a preconditioner and unless you are solving the identity (mass matrix) then it will often be significant. Jed is responding to comparing un-preconditioned matrix vs matrix-free and you do want to check that they are identical if you think you have an exact Jacobian of a linear problem. On Sat, Mar 28, 2020 at 9:54 AM Jed Brown > wrote: If the number of iterations is (significantly) different, then you'd have to debug why your code doesn't implement the same linear operator. You can use MatComputeOperator or -ksp_view_mat_explicit to help reveal differences. If the number of iterations is the same but your code is slower, then you'd have to optimize the performance of your code. 
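A minimal sketch of the check Jed describes, materializing the shell operator as an explicit AIJ matrix and measuring its difference from the assembled matrix. The function and matrix names below are placeholders, not taken from the user's code; the same comparison is available at run time via -ksp_view_mat_explicit.

    #include <petscmat.h>

    /* Debug-only check: materialize the shell operator and compare it with the
       assembled matrix.  "Ashell" and "Aassembled" are placeholder names. */
    static PetscErrorCode CompareShellWithAssembled(Mat Ashell, Mat Aassembled)
    {
      Mat            Aexplicit;
      PetscReal      nrm;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      /* Applies Ashell to the columns of the identity -> explicit AIJ copy. */
      ierr = MatComputeOperator(Ashell, MATAIJ, &Aexplicit);CHKERRQ(ierr);
      /* Aexplicit <- Aexplicit - Aassembled, then measure the mismatch. */
      ierr = MatAXPY(Aexplicit, -1.0, Aassembled, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
      ierr = MatNorm(Aexplicit, NORM_FROBENIUS, &nrm);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "||A_shell - A_assembled||_F = %g\n", (double)nrm);CHKERRQ(ierr);
      ierr = MatDestroy(&Aexplicit);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

Note that MatComputeOperator applies the operator once per column of the identity, so this check is only affordable on small debug problems.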
Yuyun Yang > writes: > Hello team, > > If I use KSPSolve for Ax=b using Matshell (a user-defined stencil equivalent of A) without preconditioning (since the standard PC would need information of the matrix so cannot apply it to matrix-free method), is it possible, or even inevitable to encounter significant slowdown compared to using a fully assembled matrix with a suitable preconditioner? I'm running a small example and it's already taking a long time for Matshell to converge. > > Thanks for your help, > Yuyun -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun Mar 29 04:41:52 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 29 Mar 2020 05:41:52 -0400 Subject: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix In-Reply-To: References: <877dz4y48a.fsf@jedbrown.org> Message-ID: On Sat, Mar 28, 2020 at 9:14 PM Yuyun Yang wrote: > Thanks for the explanations. Yes they are identical if no preconditioner > is applied. > > A follow-up question on this: in the case of a changing matrix at every > time step, it's necessary for my code to destroy and re-assemble the matrix > every time, > No, > so I was wondering whether the matrix-free method, even though with no > preconditioner the KSP convergence is slower, would have any advantage by > saving the matrix assembly step? > Yes. > Or the time saved by that is insignificant compared to the slowdown in KSP > with a preconditioned matrix? > This problem dependent, but probably yes. You can also use a matrix-free operator and a stored matrix preconditioner matrix. That is why KSPSetOpertarors takes two matrices. Then you can "lag" the preconditioner and just update it when it pays off. > > Thank you, > Yuyun > > ?? Outlook for Android > > ------------------------------ > *From:* Mark Adams > *Sent:* Saturday, March 28, 2020 10:29:14 PM > *To:* Jed Brown > *Cc:* Yuyun Yang ; petsc-users at mcs.anl.gov < > petsc-users at mcs.anl.gov> > *Subject:* Re: [petsc-users] Speed of KSPSolve using Matshell vs. regular > matrix > > I think Yuyun is saying "....inevitable to encounter significant slowdown > compared to using a fully assembled matrix *with a suitable > preconditioner*? " > > You convergence rate will basically always be better with a preconditioner > and unless you are solving the identity (mass matrix) then it will often be > significant. > > Jed is responding to comparing un-preconditioned matrix vs matrix-free and > you do want to check that they are identical if you think you have an exact > Jacobian of a linear problem. > > > On Sat, Mar 28, 2020 at 9:54 AM Jed Brown wrote: > > If the number of iterations is (significantly) different, then you'd > have to debug why your code doesn't implement the same linear operator. > You can use MatComputeOperator or -ksp_view_mat_explicit to help reveal > differences. > > If the number of iterations is the same but your code is slower, then > you'd have to optimize the performance of your code. > > Yuyun Yang writes: > > > Hello team, > > > > If I use KSPSolve for Ax=b using Matshell (a user-defined stencil > equivalent of A) without preconditioning (since the standard PC would need > information of the matrix so cannot apply it to matrix-free method), is it > possible, or even inevitable to encounter significant slowdown compared to > using a fully assembled matrix with a suitable preconditioner? I'm running > a small example and it's already taking a long time for Matshell to > converge. 
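Mark's suggestion of pairing a matrix-free operator with a lagged, assembled preconditioning matrix might look roughly like the sketch below. Ashell, Pmat and rebuild_pc are placeholder names, and when to refresh Pmat is entirely problem dependent.

    #include <petscksp.h>

    /* Sketch: the Krylov method applies the exact shell operator every step,
       while the assembled matrix Pmat is only used to build the PC and can be
       kept ("lagged") across steps until rebuild_pc is set. */
    static PetscErrorCode SolveStep(KSP ksp, Mat Ashell, Mat Pmat, PetscBool rebuild_pc, Vec b, Vec x)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      /* Operator applied matrix-free; preconditioner built from Pmat. */
      ierr = KSPSetOperators(ksp, Ashell, Pmat);CHKERRQ(ierr);
      /* Reuse the existing PC setup unless the lagged Pmat is deemed too stale. */
      ierr = KSPSetReusePreconditioner(ksp, rebuild_pc ? PETSC_FALSE : PETSC_TRUE);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }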
> > > > Thanks for your help, > > Yuyun > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yyang85 at stanford.edu Sun Mar 29 05:46:40 2020 From: yyang85 at stanford.edu (Yuyun Yang) Date: Sun, 29 Mar 2020 10:46:40 +0000 Subject: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix In-Reply-To: References: <877dz4y48a.fsf@jedbrown.org> , Message-ID: I see, thanks for the suggestion! Will try that out. ?? Outlook for Android ________________________________ From: Mark Adams Sent: Sunday, March 29, 2020 5:41:52 PM To: Yuyun Yang Cc: Jed Brown ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix On Sat, Mar 28, 2020 at 9:14 PM Yuyun Yang > wrote: Thanks for the explanations. Yes they are identical if no preconditioner is applied. A follow-up question on this: in the case of a changing matrix at every time step, it's necessary for my code to destroy and re-assemble the matrix every time, No, so I was wondering whether the matrix-free method, even though with no preconditioner the KSP convergence is slower, would have any advantage by saving the matrix assembly step? Yes. Or the time saved by that is insignificant compared to the slowdown in KSP with a preconditioned matrix? This problem dependent, but probably yes. You can also use a matrix-free operator and a stored matrix preconditioner matrix. That is why KSPSetOpertarors takes two matrices. Then you can "lag" the preconditioner and just update it when it pays off. Thank you, Yuyun ?? Outlook for Android ________________________________ From: Mark Adams > Sent: Saturday, March 28, 2020 10:29:14 PM To: Jed Brown > Cc: Yuyun Yang >; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Speed of KSPSolve using Matshell vs. regular matrix I think Yuyun is saying "....inevitable to encounter significant slowdown compared to using a fully assembled matrix with a suitable preconditioner? " You convergence rate will basically always be better with a preconditioner and unless you are solving the identity (mass matrix) then it will often be significant. Jed is responding to comparing un-preconditioned matrix vs matrix-free and you do want to check that they are identical if you think you have an exact Jacobian of a linear problem. On Sat, Mar 28, 2020 at 9:54 AM Jed Brown > wrote: If the number of iterations is (significantly) different, then you'd have to debug why your code doesn't implement the same linear operator. You can use MatComputeOperator or -ksp_view_mat_explicit to help reveal differences. If the number of iterations is the same but your code is slower, then you'd have to optimize the performance of your code. Yuyun Yang > writes: > Hello team, > > If I use KSPSolve for Ax=b using Matshell (a user-defined stencil equivalent of A) without preconditioning (since the standard PC would need information of the matrix so cannot apply it to matrix-free method), is it possible, or even inevitable to encounter significant slowdown compared to using a fully assembled matrix with a suitable preconditioner? I'm running a small example and it's already taking a long time for Matshell to converge. > > Thanks for your help, > Yuyun -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From berend.vanwachem at ovgu.de Mon Mar 30 06:15:59 2020 From: berend.vanwachem at ovgu.de (Berend van Wachem) Date: Mon, 30 Mar 2020 13:15:59 +0200 Subject: [petsc-users] Question about DMPLEX/P4EST with different Sections In-Reply-To: References: <63040a7a-918d-2fe1-df59-7c741a9621e1@ovgu.de> Message-ID: <2e2c38f1-536e-9f07-e610-1afeaa768ff1@ovgu.de> Dear Matt, I am still not having success with the different sections on the DMForest. The example you sent works only if the two sections are the same. But, if the first section stores the data at the cell centers and the second section stores the data at the cell faces, the code crashes. I've attached the example again with 1 change compared to the example you sent - line 134 indicates the second section should store the data at the cell face. This crashes for me, see below. Do you have any idea? Many thanks, Berend. $ workingexample -on_error_attach_debugger [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] DMPlexTransferVecTree_Interpolate line 4000 /usr/local/petsc-3.12.4/src/dm/impls/plex/plextree.c [0]PETSC ERROR: [0] DMPlexTransferVecTree line 4505 /usr/local/petsc-3.12.4/src/dm/impls/plex/plextree.c [0]PETSC ERROR: [0] DMForestTransferVec_p8est line 4829 /usr/local/petsc-3.12.4/src/dm/impls/forest/p4est/pforest.c [0]PETSC ERROR: [0] DMForestTransferVec line 997 /usr/local/petsc-3.12.4/src/dm/impls/forest/forest.c [0]PETSC ERROR: User provided function() line 0 in unknown file On 2020-03-19 23:39, Matthew Knepley wrote: > Okay this runs for me. > > ? Thanks, > > ? ? Matt > > On Thu, Mar 19, 2020 at 6:07 PM Matthew Knepley > wrote: > > On Fri, Mar 13, 2020 at 9:45 AM Berend van Wachem > > wrote: > > Dear Matt, > > Thanks for your response. My understanding of the DM and DMClone > is the > same - and I have tested this with a DMPLEX DM without problems. > > However, for some reason, I cannot change/set the section of a > P4EST dm. > In the attached example code, I get an error in line 140, where > I try to > create a new section from the cloned P4EST DM. Is it not > possible to > create/set a section on a P4EST DM? Or maybe I am doing > something else > wrong? Do you suggest a workaround? > > > Hi Berend, > > Sorry I am behind. The problem on line 140 is that you call a DMPlex > function (DMPlexCreateSection) > with a DMForest object. That is illegal. You can, however, call that > function using the Plex you get from > a Forest using DMConvert(DMForestClone, DMPLEX, &plexClone). I will > get your code running as soon > as I can, but after you create the Section, attaching it should be fine. > > ? Thanks, > > ? ? ?Matt > > Many thanks, Berend. > > > On 2020-03-13 00:19, Matthew Knepley wrote: > > On Thu, Mar 12, 2020 at 7:40 AM Berend van Wachem > > > >> wrote: > > > >? ? 
?Dear All, > > > >? ? ?I have started to use DMPLEX with P4EST for a > computational fluid > >? ? ?dynamics application.?I am solving a coupled system of 4 > discretised > >? ? ?equations (for 3 velocity components and one pressure) on > a mesh. > >? ? ?However, next to these 4 variables, I also have a few > single field > >? ? ?variables (such as density and viscosity) defined over > the mesh, > >? ? ?which I > >? ? ?don't solve for (they should not be part of the matrix > with unknowns). > >? ? ?Most of these variables are at the cell centers, but in a > few cases, it > >? ? ?want to define them at cell faces. > > > >? ? ?With just DMPLEX, I solve this by: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? ?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMCreateGlobalVector (and Matrix), so I get an Unknown > vector, a RHS > >? ? ?vector, and a matrix for the 4 variables. > > > >? ? ?To get a vector for a single variable at the cell center > or the cell > >? ? ?face, I clone the original DM, I define a new Section on > it, and then > >? ? ?create the vector from that which I need (e.g. for density, > >? ? ?viscosity or > >? ? ?a velocity at the cell face). > > > >? ? ?Then I loop over the mesh, and with MatSetValuesLocal, I > set the > >? ? ?coefficients. After that, I solve the system for multiple > timesteps > >? ? ?(sequential solves) and get the solution vector with the > 4 variables > >? ? ?after each solve. > > > >? ? ?So-far, this works fine with DMPLEX. However, now I want > to use P4EST, > >? ? ?and I have difficulty defining a variable vector other > than the > >? ? ?original 4. > > > >? ? ?I have changed the code structure: > > > >? ? ?DMPlexCreateMesh, so I get an initial DM > >? ? ?DMPlexCreateSection, indicating the need for 4 variables > >? ? ?DMSetLocalSection > >? ? ?DMForestSetBaseDM(DM, DMForest) to create a DMForest > >? ? ?DMCreateGlobalVector (and Matrix), so I get a Unknown > vector, a RHS > >? ? ?vector, and a matrix for the 4 variables > > > >? ? ?then I perform multiple time-steps, > >? ? ? ? ?DMForestTemplate(DMForest -> ?DMForestPost) > >? ? ? ? ?Adapt DMForestPost > >? ? ? ? ?DMCreateGlovalVector(DMForestPost, RefinedUnknownVector) > >? ? ? ? ?DMForestTransferVec(UnknownVector , RefinedUnknownVector) > >? ? ? ? ?DMForestPost -> DMForest > >? ? ?and then DMConvert(DMForest,DMPLEX,DM) > >? ? ?and I can solve the system as usual. That also seems to work. > > > >? ? ?But my conceptual question: how can I convert the other > variable > >? ? ?vectors > >? ? ?(obtained with a different section on the same DM) such > as density and > >? ? ?viscosity and faceVelocity within this framework? > > > > > > Here is my current thinking about DMs. A DM is a function space > > overlaying a topology. Much to my dismay, we > > do not have a topology object, so it hides inside DM. > DMClone() creates > > a shallow copy of the topology. We use > > this to have any number of data layouts through PetscSection, > laying > > over the same underlying topology. > > > > So for each layout you have, make a separate clone. Then > things like > > TransferVec() will respond to the layout in > > that clone. Certainly it works this way in Plex. I admit to > not having > > tried this for TransferVec(), but let me know if > > you have any problems. 
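A minimal sketch of the "one clone per layout" idea described above, here for a one-dof-per-face field on a Plex mesh (faces taken as the height-1 points). All names are placeholders and error handling is abbreviated; the clone carries its own section, so routines such as DMForestTransferVec see this layout rather than the 4-field solution layout.

    #include <petscdmplex.h>

    static PetscErrorCode CreateFaceVector(DM dm, Vec *faceVec)
    {
      DM             dmFace;
      PetscSection   sec;
      PetscInt       pStart, pEnd, fStart, fEnd, f;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = DMClone(dm, &dmFace);CHKERRQ(ierr);                 /* same topology, no layout yet */
      ierr = DMPlexGetChart(dmFace, &pStart, &pEnd);CHKERRQ(ierr);
      ierr = DMPlexGetHeightStratum(dmFace, 1, &fStart, &fEnd);CHKERRQ(ierr); /* faces */
      ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dmFace), &sec);CHKERRQ(ierr);
      ierr = PetscSectionSetChart(sec, pStart, pEnd);CHKERRQ(ierr);
      for (f = fStart; f < fEnd; ++f) {
        ierr = PetscSectionSetDof(sec, f, 1);CHKERRQ(ierr);       /* one unknown per face */
      }
      ierr = PetscSectionSetUp(sec);CHKERRQ(ierr);
      ierr = DMSetLocalSection(dmFace, sec);CHKERRQ(ierr);
      ierr = PetscSectionDestroy(&sec);CHKERRQ(ierr);
      ierr = DMCreateGlobalVector(dmFace, faceVec);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }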
> > > > BTW, I usually use a dm for the solution, which I give to the > solver, > > say SNESSetDM(snes, dm), and then clone > > it as dmAux which has the layout for all the auxiliary fields > that are > > not involved in the solve. The Plex examples > > all use this form. > > > >? ? Thanks, > > > >? ? ? ?Matt > > > >? ? ?The DMForest has the same Section as the original DM and > will thus have > >? ? ?the space for exactly 4 variables per cell. I tried > pushing another > >? ? ?section on the DMForest and DMForestPost, but that does > not seem to > >? ? ?work. Please find attached a working example with code to > do this, > >? ? ?but I > >? ? ?get the error: > > > >? ? ?PETSC ERROR: PetscSectionGetChart() line 513 in > > > ?/usr/local/petsc-3.12.4/src/vec/is/section/interface/section.c > Wrong > >? ? ?type of object: Parameter # 1 > > > >? ? ?So, I is there a way to DMForestTransferVec my other > vectors from one > >? ? ?DMForest to DMForestPost. How can I do this? > > > >? ? ?Many thanks for your help! > > > >? ? ?Best wishes, Berend. > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results > to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- A non-text attachment was scrubbed... Name: dmplexp4est2.c Type: text/x-csrc Size: 7537 bytes Desc: not available URL: From balay at mcs.anl.gov Mon Mar 30 09:40:53 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 30 Mar 2020 09:40:53 -0500 (CDT) Subject: [petsc-users] PETSc 3.12 release Message-ID: We are pleased to announce the release of PETSc version 3.13 at http://www.mcs.anl.gov/petsc The major changes and updates can be found at http://www.mcs.anl.gov/petsc/documentation/changes/313.html The final update to petsc-3.12 i.e petsc-3.12.5 is also available We recommend upgrading to PETSc 3.13 soon. As always, please report problems to petsc-maint at mcs.anl.gov and ask questions at petsc-users at mcs.anl.gov This release includes contributions from Alp Dener Barry Smith Betrie, Getnet Brandon Whitchurch Claas Abert Fande Kong Florian Getnet Betrie Hannah Morgan Hansol Suh Hong Zhang Jacob Faibussowitsch Jed Brown Jeremy L Thompson Jose Roman Junchao Zhang Klaus Zimmermann Lawrence Mitchell Lisandro Dalcin marius Martin Diehl Matthew Knepley Mr. 
Hong Zhang Patrick Sanan Peter Hill Pierre Jolivet Richard Tran Mills Sajid Ali Satish Balay Scott Kruger Scott MacLachlan Shrirang Abhyankar Stefano Zampini Steve Benbow Toby Isaac Tyler Chen Vaclav Hapla Valeria Barra and bug reports/patches/proposed improvements received from "Abhyankar, Shrirang G" Arash Mehraban Barry Smith Benjamin Bugeat Brandon Denton Cameron Smith Carl Hall Claudio Tomasi "David, Cedric H (US 329F)" Dipankar Dwivedi Dmitry Melnichuk Ed Bueler Emmanuel Ayala Eric Chamberland Fande Kong Frederic Vi Gautam Bisht Guangming Wang Hong Zhang Ivan Blagopoluchnyy Jacob Faibussowitsch James Ramsey James Wright Jan Grie?er Jaysaval, Piyoosh Jed Brown Jeremy Thompson Jin Chen jordic Jose E. Roman Junchao Zhang Justin Herter Kai Germaschewski Lisandro Dalcin Marius Buerkle Mark Adams Mark Lohry Martin Diehl Matthew Knepley "McInnes, Lois Curfman" Mehmet Sahin Miguel Fosas de Pando Miguel Salazar De Troya mphysx team "Nourgaliev, Robert Nr" "Ofori-Opoku, Nana" PAEZ ESPEJO Miguel-Angel EUROGICIEL INGENIERIE Patrick Zulian Paul T. Bauman Philippe Billuart Pierre Jolivet Ramin Moghadasi Randall Mackie Richard Tran Mills Sajid Ali Sam Guo Satish Balay Scott Kruger S?bastien Gilles Thibaut Appel Toby Isaac Tomas Mondragon Valeria Barra Victor Eijkhout Xiaodong Liu As always, thanks for your support, Satish, on behalf of PETSc team From balay at mcs.anl.gov Mon Mar 30 09:57:44 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 30 Mar 2020 09:57:44 -0500 (CDT) Subject: [petsc-users] PETSc 3.13 release In-Reply-To: References: Message-ID: Sorry - this subject should have said: PETSc 3.13 release Satish On Mon, 30 Mar 2020, Satish Balay wrote: > > We are pleased to announce the release of PETSc version 3.13 at http://www.mcs.anl.gov/petsc > > The major changes and updates can be found at http://www.mcs.anl.gov/petsc/documentation/changes/313.html > > The final update to petsc-3.12 i.e petsc-3.12.5 is also available > > We recommend upgrading to PETSc 3.13 soon. As always, please report problems to petsc-maint at mcs.anl.gov and ask questions at petsc-users at mcs.anl.gov > > This release includes contributions from > > Alp Dener > Barry Smith > Betrie, Getnet > Brandon Whitchurch > Claas Abert > Fande Kong > Florian > Getnet Betrie > Hannah Morgan > Hansol Suh > Hong Zhang > Jacob Faibussowitsch > Jed Brown > Jeremy L Thompson > Jose Roman > Junchao Zhang > Klaus Zimmermann > Lawrence Mitchell > Lisandro Dalcin > marius > Martin Diehl > Matthew Knepley > Mr. Hong Zhang > Patrick Sanan > Peter Hill > Pierre Jolivet > Richard Tran Mills > Sajid Ali > Satish Balay > Scott Kruger > Scott MacLachlan > Shrirang Abhyankar > Stefano Zampini > Steve Benbow > Toby Isaac > Tyler Chen > Vaclav Hapla > Valeria Barra > > > and bug reports/patches/proposed improvements received from > > > "Abhyankar, Shrirang G" > Arash Mehraban > Barry Smith > Benjamin Bugeat > Brandon Denton > Cameron Smith > Carl Hall > Claudio Tomasi > "David, Cedric H (US 329F)" > Dipankar Dwivedi > Dmitry Melnichuk > Ed Bueler > Emmanuel Ayala > Eric Chamberland > Fande Kong > Frederic Vi > Gautam Bisht > Guangming Wang > Hong Zhang > Ivan Blagopoluchnyy > Jacob Faibussowitsch > James Ramsey > James Wright > Jan Grie?er > Jaysaval, Piyoosh > Jed Brown > Jeremy Thompson > Jin Chen > jordic > Jose E. 
Roman > Junchao Zhang > Justin Herter > Kai Germaschewski > Lisandro Dalcin > Marius Buerkle > Mark Adams > Mark Lohry > Martin Diehl > Matthew Knepley > "McInnes, Lois Curfman" > Mehmet Sahin > Miguel Fosas de Pando > Miguel Salazar De Troya > mphysx team > "Nourgaliev, Robert Nr" > "Ofori-Opoku, Nana" > PAEZ ESPEJO Miguel-Angel EUROGICIEL INGENIERIE > Patrick Zulian > Paul T. Bauman > Philippe Billuart > Pierre Jolivet > Ramin Moghadasi > Randall Mackie > Richard Tran Mills > Sajid Ali > Sam Guo > Satish Balay > Scott Kruger > S?bastien Gilles > Thibaut Appel > Toby Isaac > Tomas Mondragon > Valeria Barra > Victor Eijkhout > Xiaodong Liu > > As always, thanks for your support, > > Satish, on behalf of PETSc team From rlmackie862 at gmail.com Mon Mar 30 10:45:40 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Mon, 30 Mar 2020 08:45:40 -0700 Subject: [petsc-users] duplicate PETSC options Message-ID: <64A0D637-2214-458B-AF49-33ADFB3D2B58@gmail.com> When PETSc reads in a list of options (like PetscOptionsGetReal, etc), we have noticed that if there are duplicate entries, that PETSc takes the last one entered as the option to use. This can happen if the user didn?t notice there were two lines with the same options name (but different values set). Is there someway to have PETSc check for duplicate entries so that we can stop program execution and warn the user? Thanks, Randy M From knepley at gmail.com Mon Mar 30 11:15:50 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 30 Mar 2020 12:15:50 -0400 Subject: [petsc-users] duplicate PETSC options In-Reply-To: <64A0D637-2214-458B-AF49-33ADFB3D2B58@gmail.com> References: <64A0D637-2214-458B-AF49-33ADFB3D2B58@gmail.com> Message-ID: On Mon, Mar 30, 2020 at 11:46 AM Randall Mackie wrote: > When PETSc reads in a list of options (like PetscOptionsGetReal, etc), we > have noticed that if there are duplicate entries, that PETSc takes the last > one entered as the option to use. This can happen if the user didn?t notice > there were two lines with the same options name (but different values set). > > Is there someway to have PETSc check for duplicate entries so that we can > stop program execution and warn the user? > We could add an option. We make heavy use of this behavior in order to override options previously specified (which act as defaults). Could you create an issue for it? Thanks, Matt > Thanks, > > Randy M -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lu_qin_2000 at yahoo.com Mon Mar 30 11:56:21 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Mon, 30 Mar 2020 16:56:21 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: Message-ID: <262299643.779351.1585587381138@mail.yahoo.com> Hello, I am trying to build Petsc-3.4.2 in my Windows-10 workstation using Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The configuration/compilation/installation seem to finish without problem, but test program (ex19) failed since it could not find a shared lib. Then I linked the libpetsc.lib with my program (in Fortran-90), but it got run time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other Petsc subroutines. Note that this package was built, tested and worked well with the same Fortran-90 program in my Windows-7 workstation.? 
Also tried Petsc-3.12.4 but got the same errors. The following is my configuration: =============== ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 =============== The error message of running ex19 is: ================= $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit test Running test examples to verify correct installation Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit and PETSC_ARCH=arch-win64-debug Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process See http://www.mcs.anl.gov/petsc/documentation/faq.html C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory ================= Thanks a lot for any suggestions. Best Regards, Qin ? ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Mon Mar 30 12:01:53 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Mon, 30 Mar 2020 10:01:53 -0700 Subject: [petsc-users] duplicate PETSC options In-Reply-To: References: <64A0D637-2214-458B-AF49-33ADFB3D2B58@gmail.com> Message-ID: <6347CEC3-DCD4-4137-A4CA-18A495139D4E@gmail.com> Hi Matt, Yes I just submitted an issue. Thanks very much. Randy M. > On Mar 30, 2020, at 9:15 AM, Matthew Knepley wrote: > > On Mon, Mar 30, 2020 at 11:46 AM Randall Mackie > wrote: > When PETSc reads in a list of options (like PetscOptionsGetReal, etc), we have noticed that if there are duplicate entries, that PETSc takes the last one entered as the option to use. This can happen if the user didn?t notice there were two lines with the same options name (but different values set). > > Is there someway to have PETSc check for duplicate entries so that we can stop program execution and warn the user? > > We could add an option. We make heavy use of this behavior in order to override options previously specified (which act as defaults). > Could you create an issue for it? > > Thanks, > > Matt > > Thanks, > > Randy M > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Mar 30 12:26:06 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 30 Mar 2020 12:26:06 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <262299643.779351.1585587381138@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> Message-ID: MPICH is unsupported - and we haven't tested with it for a long time. And petsc-3.4.2 is from 2013 - and untested with current gen os/compilers/libraries. Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? We recommend 64bit MSMPI for windows. Satish On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > Hello, > I am trying to build Petsc-3.4.2 in my Windows-10 workstation using Cygwin, with Intel-2018 compilers and MKL, and MPICH2. 
The configuration/compilation/installation seem to finish without problem, but test program (ex19) failed since it could not find a shared lib. Then I linked the libpetsc.lib with my program (in Fortran-90), but it got run time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other Petsc subroutines. Note that this package was built, tested and worked well with the same Fortran-90 program in my Windows-7 workstation.? > > Also tried Petsc-3.12.4 but got the same errors. > > The following is my configuration: > > > =============== > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > =============== > > > The error message of running ex19 is: > > > ================= > > $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit test > > Running test examples to verify correct installation > > Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit and PETSC_ARCH=arch-win64-debug > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > > ================= > > > Thanks a lot for any suggestions. > > > Best Regards, > > Qin > > ? > > ? > From fdkong.jd at gmail.com Mon Mar 30 12:25:07 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Mon, 30 Mar 2020 11:25:07 -0600 Subject: [petsc-users] AIJ vs BAIJ when using ILU factorization Message-ID: Hi All, There is a system of equations arising from the discretization of 3D incompressible Navier-Stoke equations using a finite element method. 4 unknowns are placed on each mesh point, and then there is a 4x4 saddle point block on each mesh vertex. I was thinking to solve the linear equations using an incomplete LU factorization (that will be eventually used as a subdomain solver for ASM). Right now, I am trying to study the ILU performance using AIJ and BAIJ, respectively. From my understanding, BAIJ should give me better results since it inverses the 4x4 blocks exactly, while AIJ does not. However, I found that both BAIJ and AIJ gave me identical results in terms of the number of iterations. Was that just coincident? Or in theory, they are just identical. I understand the runtimes may be different because BAIJ has a better data locality. Please see the attached files for the results and solver configuration. Thanks, Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: aij.result Type: application/octet-stream Size: 12893 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: baij.result Type: application/octet-stream Size: 12818 bytes Desc: not available URL: From balay at mcs.anl.gov Mon Mar 30 13:47:43 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 30 Mar 2020 13:47:43 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <1342653823.808354.1585593391072@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> Message-ID: Please preserve cc: to the list > shared libraries: disabled So PETSc is correctly built as static. > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory So its not clear which shared library this error is referring to. But then - this error was with petsc-3.4.2 You can always try to run the code manually without mpiexec - and see if that works. cd src/ksp/ksp/examples/tutorials make ex2 ./ex2 Wrt MSMPI - yes its free to download And PETSc does work with Intel-MPI. It might be a separate download/install. [so I can't say if what you have is the correct install of IntelMPI or not] Check the builds we use for testing - for ex: config/examples/arch-ci-mswin-*.py Satish On Mon, 30 Mar 2020, Qin Lu wrote: > Hi Satish, > The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > Is the MSMPI free to use in Windows-10? > Does Petsc support Intel-MPI? I have it in my machine, but for some reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include subdirectory of it. > Thanks a lot for your help.Qin > On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay wrote: > > MPICH is unsupported - and we haven't tested with it for a long time. > > And petsc-3.4.2 is from 2013 - and untested with current gen os/compilers/libraries. > > Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > We recommend 64bit MSMPI for windows. > > Satish > > On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > Hello, > > I am trying to build Petsc-3.4.2 in my Windows-10 workstation using Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The configuration/compilation/installation seem to finish without problem, but test program (ex19) failed since it could not find a shared lib. Then I linked the libpetsc.lib with my program (in Fortran-90), but it got run time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other Petsc subroutines. Note that this package was built, tested and worked well with the same Fortran-90 program in my Windows-7 workstation.? > >? > > Also tried Petsc-3.12.4 but got the same errors. > >? > > The following is my configuration: > > > >? > > =============== > >? > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > >? > > =============== > > > >? > > The error message of running ex19 is: > > > >? > > ================= > >? > > $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit test > >? > > Running test examples to verify correct installation > >? > > Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit and PETSC_ARCH=arch-win64-debug > >? 
> > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process > >? > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > >? > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > >? > > ================= > > > >? > > Thanks a lot for any suggestions. > > > >? > > Best Regards, > >? > > Qin > >? > >? ? > >? > >? ? > >? ? From lu_qin_2000 at yahoo.com Mon Mar 30 15:43:27 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Mon, 30 Mar 2020 20:43:27 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> Message-ID: <92682649.852381.1585601007985@mail.yahoo.com> Hi Satish, The ex2.exe works with "mpiexec -np 2" when I ran it from command line. Then I ran "which mpiexec", it actually points to Intel-MPI instead of MPICH2, probably because I have set the former's path in environment variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc with Intel-MPI. As for the crash of calling to?KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in my Fortran-90 program, do you have any idea what can be wrong? Can it be related to MPI? I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got the following output: ============python ./arch-ci-mswin-intel.pyTraceback (most recent call last):? File "./arch-ci-mswin-intel.py", line 10, in ??? import configureImportError: No module named configure============ Thanks,Qin I will try to use Intel-MPI and see what will happen. Thanks,Qin On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay wrote: Please preserve cc: to the list >? shared libraries: disabled So PETSc? is correctly built as static. > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory So its not clear which shared library this error is referring to. But then - this error was with petsc-3.4.2 You can always try to run the code manually without mpiexec - and see if that works. cd src/ksp/ksp/examples/tutorials make ex2 ./ex2 Wrt MSMPI - yes its free to download And PETSc does work with Intel-MPI. It might be a separate download/install. [so I can't say if what you have is the correct install of IntelMPI or not] Check the builds we use for testing - for ex: config/examples/arch-ci-mswin-*.py Satish On Mon, 30 Mar 2020, Qin Lu wrote: >? Hi Satish, > The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > Is the MSMPI free to use in Windows-10? > Does Petsc support Intel-MPI? I have it in my machine, but for some reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include subdirectory of it. > Thanks a lot for your help.Qin >? ? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay wrote:? >? >? MPICH is unsupported - and we haven't tested with it for a long time. > > And petsc-3.4.2 is from 2013 - and untested with current gen os/compilers/libraries. > > Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > We recommend 64bit MSMPI for windows. > > Satish > > On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > Hello, > > I am trying to build Petsc-3.4.2 in my Windows-10 workstation using Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The configuration/compilation/installation seem to finish without problem, but test program (ex19) failed since it could not find a shared lib. 
Then I linked the libpetsc.lib with my program (in Fortran-90), but it got run time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other Petsc subroutines. Note that this package was built, tested and worked well with the same Fortran-90 program in my Windows-7 workstation.? > >? > > Also tried Petsc-3.12.4 but got the same errors. > >? > > The following is my configuration: > > > >? > > =============== > >? > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > >? > > =============== > > > >? > > The error message of running ex19 is: > > > >? > > ================= > >? > > $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit test > >? > > Running test examples to verify correct installation > >? > > Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit and PETSC_ARCH=arch-win64-debug > >? > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process > >? > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > >? > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > >? > > ================= > > > >? > > Thanks a lot for any suggestions. > > > >? > > Best Regards, > >? > > Qin > >? > >? ? > >? > >? ? > >? ?? -------------- next part -------------- An HTML attachment was scrubbed... URL: From karabelaselias at gmail.com Mon Mar 30 16:12:39 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Mon, 30 Mar 2020 23:12:39 +0200 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <87a7472kcb.fsf@jedbrown.org> References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> <87a7472kcb.fsf@jedbrown.org> Message-ID: <303b4e46-3a7e-b418-26cf-d44e8a79192b@gmail.com> Dear Jed, Thanks I will try to keep you updated. So I managed to get my stuff running with one process but it fails with more than one. The matrix assembly as you described it works fine but I forgot that in the FCT algorithm one also needs to assemble a matrix D that looks kinda like D_ij = L_ij * (u[i] - u[j]) where u is a vector with solution data from a previous time step. So the problem is that I have to now access off-process values of the Vector u, and doing this for each row of D. My basic first idea was to utilize the VecScatter Example from the Petsc Manual to get the off-process values per row of L to build this matrix D. But somehow this does not really work. Any ideas on how to scatter the Vector-data so that I can get the correct Vector values ? At the moment I'm using something along this lines (in a loop where I get the local array of u with VecGetArray --> u_arr) MatGetRow(L, row, &ncolsL, &colsL, &valsL); double * Dvals = new double[ncolsM](); int * idx_to = new int[ncolsL](); for(int k=0; k < ncolsM; k++) ??? 
idx_to[k] = k; MPI_Barrier(PETSC_COMM_WORLD); //Scatter the right stuff Vec x; VecScatter scatter; IS from, to; double *vals; VecCreateSeq(PETSC_COMM_SELF, ncolsL, &x); ISCreateGeneral(PETSC_COMM_SELF,ncolsL,colsL,PETSC_COPY_VALUES,&from); ISCreateGeneral(PETSC_COMM_SELF,ncolsL,idx_to,PETSC_COPY_VALUES,&to); VecScatterCreate(u,from,x,to,&scatter); VecScatterBegin(scatter,u,x,INSERT_VALUES,SCATTER_FORWARD); VecScatterEnd(scatter,u,x,INSERT_VALUES,SCATTER_FORWARD); VecGetArray(x,&vals); for(int c=0; c < ncolsL; c++) { ??? Dvals[c] = valsL[c] * (vals[c] -? u_arr[row]) } MatSetValues(D, 1, &row, ncolsL, colsL, Dvals, ADD_VALUES); and so on... On 23/03/2020 15:53, Jed Brown wrote: > Thanks; please don't drop the list. > > I'd be curious whether this operation is common enough that we should > add it to PETSc. My hesitance has been that people may want many > different variants when working with systems of equations, for example. > > Elias Karabelas writes: > >> Dear Jed, >> >> Yes the Matrix A comes from assembling a FEM-convection-diffusion >> operator over a tetrahedral mesh. So my matrix graph should be >> symmetric. Thanks for the snippet >> >> On 23/03/2020 15:42, Jed Brown wrote: >>> Elias Karabelas writes: >>> >>>> Dear Users, >>>> >>>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>>> To this end I have amongst other things create a Matrix whose entries >>>> are given by >>>> >>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>> >>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>> >>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>>> >>>> I was wondering how to do this. My first search brought me to >>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>> >>>> >>>> but this just goes over the rows of one matrix to set new values and now >>>> I would need to run over the rows and columns of the matrix. My Idea was >>>> to just create a transpose of A and do the same but then the row-layout >>>> will be different and I can't use the same for loop for A and AT and >>>> thus also won't be able to calculate the max's above. >>> Does your matrix have symmetric nonzero structure? (It's typical for >>> finite element methods.) >>> >>> If so, all the indices will match up so I think you can do something like: >>> >>> for (row=rowstart; row>> PetscScalar Lvals[MAX_LEN]; >>> PetscInt diag; >>> MatGetRow(A, row, &ncols, &cols, &vals); >>> MatGetRow(At, row, &ncolst, &colst, &valst); >>> assert(ncols == ncolst); // symmetric structure >>> PetscScalar sum = 0; >>> for (c=0; c>> assert(cols[c] == colst[c]); // symmetric structure >>> if (cols[c] == row) diag = c; >>> else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); >>> } >>> Lvals[diag] = sum; >>> MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); >>> MatRestoreRow(A, row, &ncols, &cols, &vals); >>> MatRestoreRow(At, row, &ncolst, &colst, &valst); >>> } From knepley at gmail.com Mon Mar 30 16:14:57 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 30 Mar 2020 17:14:57 -0400 Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <92682649.852381.1585601007985@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> Message-ID: On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi Satish, > > The ex2.exe works with "mpiexec -np 2" when I ran it from command line. 
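Returning to the VecScatter question in the FCT thread above: VecScatterCreate is collective, so creating one scatter per matrix row is fragile once ranks own different numbers of rows. Below is a sketch of one alternative that gathers u once before the row loop; all names are placeholders, and for large problems one would scatter only the needed ghost entries (for example via the local vector of a DM) rather than replicating u on every rank.

    #include <petscmat.h>

    /* Sketch: build D_ij = L_ij * (u_j - u_i) using a single collective
       scatter of u instead of one VecScatter per row. */
    static PetscErrorCode BuildD(Mat L, Vec u, Mat D)
    {
      PetscInt          rStart, rEnd, row, ncols, c;
      const PetscInt    *cols;
      const PetscScalar *vals, *uall;
      Vec               uAll;
      VecScatter        scat;
      PetscErrorCode    ierr;

      PetscFunctionBeginUser;
      /* Simplest (not memory-optimal) variant: replicate u on every rank once. */
      ierr = VecScatterCreateToAll(u, &scat, &uAll);CHKERRQ(ierr);
      ierr = VecScatterBegin(scat, u, uAll, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecScatterEnd(scat, u, uAll, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecGetArrayRead(uAll, &uall);CHKERRQ(ierr);          /* global indices valid here */

      ierr = MatGetOwnershipRange(L, &rStart, &rEnd);CHKERRQ(ierr);
      for (row = rStart; row < rEnd; ++row) {
        ierr = MatGetRow(L, row, &ncols, &cols, &vals);CHKERRQ(ierr);
        for (c = 0; c < ncols; ++c) {
          PetscScalar dval = vals[c] * (uall[cols[c]] - uall[row]);
          ierr = MatSetValue(D, row, cols[c], dval, INSERT_VALUES);CHKERRQ(ierr);
        }
        ierr = MatRestoreRow(L, row, &ncols, &cols, &vals);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(D, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(D, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      ierr = VecRestoreArrayRead(uAll, &uall);CHKERRQ(ierr);
      ierr = VecScatterDestroy(&scat);CHKERRQ(ierr);
      ierr = VecDestroy(&uAll);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }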
> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > MPICH2, probably because I have set the former's path in environment > variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > with Intel-MPI. > > As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > my Fortran-90 program, do you have any idea what can be wrong? Can it be > related to MPI? > > I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > the following output: > > ============ > python ./arch-ci-mswin-intel.py > Traceback (most recent call last): > File "./arch-ci-mswin-intel.py", line 10, in > import configure > ImportError: No module named configure > ============ > You have to run those from $PETSC_DIR. Matt > Thanks, > Qin > > > > I will try to use Intel-MPI and see what will happen. > > Thanks, > Qin > > On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > balay at mcs.anl.gov> wrote: > > > Please preserve cc: to the list > > > shared libraries: disabled > > So PETSc is correctly built as static. > > > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > libraries: ?: cannot open shared object file: No such file or directory > > So its not clear which shared library this error is referring to. But then > - this error was with petsc-3.4.2 > > You can always try to run the code manually without mpiexec - and see if > that works. > > cd src/ksp/ksp/examples/tutorials > make ex2 > ./ex2 > > Wrt MSMPI - yes its free to download > > And PETSc does work with Intel-MPI. It might be a separate > download/install. [so I can't say if what you have is the correct install > of IntelMPI or not] > > Check the builds we use for testing - for ex: > config/examples/arch-ci-mswin-*.py > > Satish > > On Mon, 30 Mar 2020, Qin Lu wrote: > > > Hi Satish, > > The configure.log and RDict.log of Petsc-3.12.4 build is attached. > > Is the MSMPI free to use in Windows-10? > > Does Petsc support Intel-MPI? I have it in my machine, but for some > reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > subdirectory of it. > > Thanks a lot for your help.Qin > > On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > balay at mcs.anl.gov> wrote: > > > > MPICH is unsupported - and we haven't tested with it for a long time. > > > > And petsc-3.4.2 is from 2013 - and untested with current gen > os/compilers/libraries. > > > > Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > > We recommend 64bit MSMPI for windows. > > > > Satish > > > > On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > Hello, > > > I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > configuration/compilation/installation seem to finish without problem, but > test program (ex19) failed since it could not find a shared lib. Then I > linked the libpetsc.lib with my program (in Fortran-90), but it got run > time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > Petsc subroutines. Note that this package was built, tested and worked well > with the same Fortran-90 program in my Windows-7 workstation. > > > > > > Also tried Petsc-3.12.4 but got the same errors. 
> > > > > > The following is my configuration: > > > > > > > > > =============== > > > > > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > --with-blas-lapack-dir="/cygdrive/c/Program Files > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > =============== > > > > > > > > > The error message of running ex19 is: > > > > > > > > > ================= > > > > > > $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > test > > > > > > Running test examples to verify correct installation > > > > > > Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > and PETSC_ARCH=arch-win64-debug > > > > > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > MPI process > > > > > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > libraries: ?: cannot open shared object file: No such file or directory > > > > > > ================= > > > > > > > > > Thanks a lot for any suggestions. > > > > > > > > > Best Regards, > > > > > > Qin > > > > > > > > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lu_qin_2000 at yahoo.com Mon Mar 30 20:27:33 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Tue, 31 Mar 2020 01:27:33 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> Message-ID: <1508446702.952954.1585618053161@mail.yahoo.com> Hi, I installed Intel-MPI 2019, and configured petsc-3.12.4 using --with-mpi-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to use --with-mpi-include and --with-mpi-lib, still didn't work. The config.log is attached. The following is my configuration:=============== ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/Program Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 ============= Thanks for any suggestions. Regards, Qin On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley wrote: On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users wrote: Hi Satish, The ex2.exe works with "mpiexec -np 2" when I ran it from command line. 
Then I ran "which mpiexec", it actually points to Intel-MPI instead of MPICH2, probably because I have set the former's path in environment variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc with Intel-MPI. As for the crash of calling to?KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in my Fortran-90 program, do you have any idea what can be wrong? Can it be related to MPI? I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got the following output: ============python ./arch-ci-mswin-intel.pyTraceback (most recent call last):? File "./arch-ci-mswin-intel.py", line 10, in ??? import configureImportError: No module named configure============ You have to run those from $PETSC_DIR. ? Matt? Thanks,Qin I will try to use Intel-MPI and see what will happen. Thanks,Qin On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay wrote: Please preserve cc: to the list >? shared libraries: disabled So PETSc? is correctly built as static. > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory So its not clear which shared library this error is referring to. But then - this error was with petsc-3.4.2 You can always try to run the code manually without mpiexec - and see if that works. cd src/ksp/ksp/examples/tutorials make ex2 ./ex2 Wrt MSMPI - yes its free to download And PETSc does work with Intel-MPI. It might be a separate download/install. [so I can't say if what you have is the correct install of IntelMPI or not] Check the builds we use for testing - for ex: config/examples/arch-ci-mswin-*.py Satish On Mon, 30 Mar 2020, Qin Lu wrote: >? Hi Satish, > The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > Is the MSMPI free to use in Windows-10? > Does Petsc support Intel-MPI? I have it in my machine, but for some reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include subdirectory of it. > Thanks a lot for your help.Qin >? ? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay wrote:? >? >? MPICH is unsupported - and we haven't tested with it for a long time. > > And petsc-3.4.2 is from 2013 - and untested with current gen os/compilers/libraries. > > Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > We recommend 64bit MSMPI for windows. > > Satish > > On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > Hello, > > I am trying to build Petsc-3.4.2 in my Windows-10 workstation using Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The configuration/compilation/installation seem to finish without problem, but test program (ex19) failed since it could not find a shared lib. Then I linked the libpetsc.lib with my program (in Fortran-90), but it got run time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other Petsc subroutines. Note that this package was built, tested and worked well with the same Fortran-90 program in my Windows-7 workstation.? > >? > > Also tried Petsc-3.12.4 but got the same errors. > >? > > The following is my configuration: > > > >? > > =============== > >? 
> > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > >? > > =============== > > > >? > > The error message of running ex19 is: > > > >? > > ================= > >? > > $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit test > >? > > Running test examples to verify correct installation > >? > > Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit and PETSC_ARCH=arch-win64-debug > >? > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process > >? > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > >? > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > >? > > ================= > > > >? > > Thanks a lot for any suggestions. > > > >? > > Best Regards, > >? > > Qin > >? > >? ? > >? > >? ? > >? ?? -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: configure.log URL: From knepley at gmail.com Mon Mar 30 21:03:39 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 30 Mar 2020 22:03:39 -0400 Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <1508446702.952954.1585618053161@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> Message-ID: On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > Hi, > > I installed Intel-MPI 2019, and configured petsc-3.12.4 using > --with-mpi-dir="/cygdrive/c/Program Files > (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > use --with-mpi-include and --with-mpi-lib, still didn't work. The > config.log is attached. > > The following is my configuration: > =============== > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > --with-blas-lapack-dir="/cygdrive/c/Program Files > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > --with-mpi-include="/cygdrive/c/Program Files > (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib" --with- > mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > --with-xt=0 --with-shared-libraries=0 > > ============= > > Thanks for any suggestions. > We just cannot cope with spaces in paths. Can you use the shortened contiguous name instead of "Program File"? 
Thanks, Matt > Regards, > > Qin > > > > > > On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > knepley at gmail.com> wrote: > > > On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Hi Satish, > > The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > Then I ran "which mpiexec", it actually points to Intel-MPI instead of > MPICH2, probably because I have set the former's path in environment > variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > with Intel-MPI. > > As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > my Fortran-90 program, do you have any idea what can be wrong? Can it be > related to MPI? > > I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > the following output: > > ============ > python ./arch-ci-mswin-intel.py > Traceback (most recent call last): > File "./arch-ci-mswin-intel.py", line 10, in > import configure > ImportError: No module named configure > ============ > > > You have to run those from $PETSC_DIR. > > Matt > > > Thanks, > Qin > > > > I will try to use Intel-MPI and see what will happen. > > Thanks, > Qin > > On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > balay at mcs.anl.gov> wrote: > > > Please preserve cc: to the list > > > shared libraries: disabled > > So PETSc is correctly built as static. > > > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > libraries: ?: cannot open shared object file: No such file or directory > > So its not clear which shared library this error is referring to. But then > - this error was with petsc-3.4.2 > > You can always try to run the code manually without mpiexec - and see if > that works. > > cd src/ksp/ksp/examples/tutorials > make ex2 > ./ex2 > > Wrt MSMPI - yes its free to download > > And PETSc does work with Intel-MPI. It might be a separate > download/install. [so I can't say if what you have is the correct install > of IntelMPI or not] > > Check the builds we use for testing - for ex: > config/examples/arch-ci-mswin-*.py > > Satish > > On Mon, 30 Mar 2020, Qin Lu wrote: > > > Hi Satish, > > The configure.log and RDict.log of Petsc-3.12.4 build is attached. > > Is the MSMPI free to use in Windows-10? > > Does Petsc support Intel-MPI? I have it in my machine, but for some > reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > subdirectory of it. > > Thanks a lot for your help.Qin > > On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > balay at mcs.anl.gov> wrote: > > > > MPICH is unsupported - and we haven't tested with it for a long time. > > > > And petsc-3.4.2 is from 2013 - and untested with current gen > os/compilers/libraries. > > > > Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > > We recommend 64bit MSMPI for windows. > > > > Satish > > > > On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > Hello, > > > I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > configuration/compilation/installation seem to finish without problem, but > test program (ex19) failed since it could not find a shared lib. Then I > linked the libpetsc.lib with my program (in Fortran-90), but it got run > time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > Petsc subroutines. 
Note that this package was built, tested and worked well > with the same Fortran-90 program in my Windows-7 workstation. > > > > > > Also tried Petsc-3.12.4 but got the same errors. > > > > > > The following is my configuration: > > > > > > > > > =============== > > > > > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > --with-blas-lapack-dir="/cygdrive/c/Program Files > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > =============== > > > > > > > > > The error message of running ex19 is: > > > > > > > > > ================= > > > > > > $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > test > > > > > > Running test examples to verify correct installation > > > > > > Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > and PETSC_ARCH=arch-win64-debug > > > > > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > MPI process > > > > > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > libraries: ?: cannot open shared object file: No such file or directory > > > > > > ================= > > > > > > > > > Thanks a lot for any suggestions. > > > > > > > > > Best Regards, > > > > > > Qin > > > > > > > > > > > > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Mar 30 21:18:34 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 30 Mar 2020 21:18:34 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> Message-ID: On Mon, 30 Mar 2020, Matthew Knepley wrote: > On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > > Hi, > > > > I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > --with-mpi-dir="/cygdrive/c/Program Files > > (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > use --with-mpi-include and --with-mpi-lib, still didn't work. The > > config.log is attached. 
> > > > The following is my configuration: > > =============== > > > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > --with-blas-lapack-dir="/cygdrive/c/Program Files > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > --with-mpi-include="/cygdrive/c/Program Files > > (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib" --with- > > mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > --with-xt=0 --with-shared-libraries=0 > > > > ============= > > > > Thanks for any suggestions. > > > We just cannot cope with spaces in paths. Can you use the shortened > contiguous name instead of "Program File"? Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions The way to get this is: (for example) balay at ps5 ~ $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe Satish > > Thanks, > > Matt > > > Regards, > > > > Qin > > > > > > > > > > > > On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > knepley at gmail.com> wrote: > > > > > > On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > petsc-users at mcs.anl.gov> wrote: > > > > Hi Satish, > > > > The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > MPICH2, probably because I have set the former's path in environment > > variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > with Intel-MPI. > > > > As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > my Fortran-90 program, do you have any idea what can be wrong? Can it be > > related to MPI? > > > > I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > the following output: > > > > ============ > > python ./arch-ci-mswin-intel.py > > Traceback (most recent call last): > > File "./arch-ci-mswin-intel.py", line 10, in > > import configure > > ImportError: No module named configure > > ============ > > > > > > You have to run those from $PETSC_DIR. > > > > Matt > > > > > > Thanks, > > Qin > > > > > > > > I will try to use Intel-MPI and see what will happen. > > > > Thanks, > > Qin > > > > On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > > Please preserve cc: to the list > > > > > shared libraries: disabled > > > > So PETSc is correctly built as static. > > > > > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > libraries: ?: cannot open shared object file: No such file or directory > > > > So its not clear which shared library this error is referring to. But then > > - this error was with petsc-3.4.2 > > > > You can always try to run the code manually without mpiexec - and see if > > that works. > > > > cd src/ksp/ksp/examples/tutorials > > make ex2 > > ./ex2 > > > > Wrt MSMPI - yes its free to download > > > > And PETSc does work with Intel-MPI. It might be a separate > > download/install. 
[so I can't say if what you have is the correct install > > of IntelMPI or not] > > > > Check the builds we use for testing - for ex: > > config/examples/arch-ci-mswin-*.py > > > > Satish > > > > On Mon, 30 Mar 2020, Qin Lu wrote: > > > > > Hi Satish, > > > The configure.log and RDict.log of Petsc-3.12.4 build is attached. > > > Is the MSMPI free to use in Windows-10? > > > Does Petsc support Intel-MPI? I have it in my machine, but for some > > reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > subdirectory of it. > > > Thanks a lot for your help.Qin > > > On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > > MPICH is unsupported - and we haven't tested with it for a long time. > > > > > > And petsc-3.4.2 is from 2013 - and untested with current gen > > os/compilers/libraries. > > > > > > Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > > > > We recommend 64bit MSMPI for windows. > > > > > > Satish > > > > > > On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > Hello, > > > > I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > configuration/compilation/installation seem to finish without problem, but > > test program (ex19) failed since it could not find a shared lib. Then I > > linked the libpetsc.lib with my program (in Fortran-90), but it got run > > time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > Petsc subroutines. Note that this package was built, tested and worked well > > with the same Fortran-90 program in my Windows-7 workstation. > > > > > > > > Also tried Petsc-3.12.4 but got the same errors. > > > > > > > > The following is my configuration: > > > > > > > > > > > > =============== > > > > > > > > ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > --with-blas-lapack-dir="/cygdrive/c/Program Files > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > =============== > > > > > > > > > > > > The error message of running ex19 is: > > > > > > > > > > > > ================= > > > > > > > > $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > test > > > > > > > > Running test examples to verify correct installation > > > > > > > > Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > and PETSC_ARCH=arch-win64-debug > > > > > > > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > MPI process > > > > > > > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > > > > C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > libraries: ?: cannot open shared object file: No such file or directory > > > > > > > > ================= > > > > > > > > > > > > Thanks a lot for any suggestions. > > > > > > > > > > > > Best Regards, > > > > > > > > Qin > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which their > > experiments lead. 
> > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > From jacob.fai at gmail.com Mon Mar 30 23:56:34 2020 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Mon, 30 Mar 2020 23:56:34 -0500 Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> Message-ID: >> We just cannot cope with spaces in paths. Can you use the shortened >> contiguous name instead of "Program File"? FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) Cell: (312) 694-3391 > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: >> >>> Hi, >>> >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using >>> --with-mpi-dir="/cygdrive/c/Program Files >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The >>> config.log is attached. >>> >>> The following is my configuration: >>> =============== >>> >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi >>> --with-blas-lapack-dir="/cygdrive/c/Program Files >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" >>> --with-mpi-include="/cygdrive/c/Program Files >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib" --with- >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 >>> --with-xt=0 --with-shared-libraries=0 >>> >>> ============= >>> >>> Thanks for any suggestions. >>> >> We just cannot cope with spaces in paths. Can you use the shortened >> contiguous name instead of "Program File"? > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > The way to get this is: (for example) > > balay at ps5 ~ > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > Satish > > > >> >> Thanks, >> >> Matt >> >>> Regards, >>> >>> Qin >>> >>> >>> >>> >>> >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >>> >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < >>> petsc-users at mcs.anl.gov> wrote: >>> >>> Hi Satish, >>> >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of >>> MPICH2, probably because I have set the former's path in environment >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc >>> with Intel-MPI. 
>>> >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be >>> related to MPI? >>> >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got >>> the following output: >>> >>> ============ >>> python ./arch-ci-mswin-intel.py >>> Traceback (most recent call last): >>> File "./arch-ci-mswin-intel.py", line 10, in >>> import configure >>> ImportError: No module named configure >>> ============ >>> >>> >>> You have to run those from $PETSC_DIR. >>> >>> Matt >>> >>> >>> Thanks, >>> Qin >>> >>> >>> >>> I will try to use Intel-MPI and see what will happen. >>> >>> Thanks, >>> Qin >>> >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < >>> balay at mcs.anl.gov> wrote: >>> >>> >>> Please preserve cc: to the list >>> >>>> shared libraries: disabled >>> >>> So PETSc is correctly built as static. >>> >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared >>> libraries: ?: cannot open shared object file: No such file or directory >>> >>> So its not clear which shared library this error is referring to. But then >>> - this error was with petsc-3.4.2 >>> >>> You can always try to run the code manually without mpiexec - and see if >>> that works. >>> >>> cd src/ksp/ksp/examples/tutorials >>> make ex2 >>> ./ex2 >>> >>> Wrt MSMPI - yes its free to download >>> >>> And PETSc does work with Intel-MPI. It might be a separate >>> download/install. [so I can't say if what you have is the correct install >>> of IntelMPI or not] >>> >>> Check the builds we use for testing - for ex: >>> config/examples/arch-ci-mswin-*.py >>> >>> Satish >>> >>> On Mon, 30 Mar 2020, Qin Lu wrote: >>> >>>> Hi Satish, >>>> The configure.log and RDict.log of Petsc-3.12.4 build is attached. >>>> Is the MSMPI free to use in Windows-10? >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include >>> subdirectory of it. >>>> Thanks a lot for your help.Qin >>>> On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < >>> balay at mcs.anl.gov> wrote: >>>> >>>> MPICH is unsupported - and we haven't tested with it for a long time. >>>> >>>> And petsc-3.4.2 is from 2013 - and untested with current gen >>> os/compilers/libraries. >>>> >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? >>>> >>>> We recommend 64bit MSMPI for windows. >>>> >>>> Satish >>>> >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: >>>> >>>>> Hello, >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The >>> configuration/compilation/installation seem to finish without problem, but >>> test program (ex19) failed since it could not find a shared lib. Then I >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other >>> Petsc subroutines. Note that this package was built, tested and worked well >>> with the same Fortran-90 program in my Windows-7 workstation. >>>>> >>>>> Also tried Petsc-3.12.4 but got the same errors. 
>>>>> >>>>> The following is my configuration: >>>>> >>>>> >>>>> =============== >>>>> >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit >>> --with-blas-lapack-dir="/cygdrive/c/Program Files >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 >>>>> >>>>> =============== >>>>> >>>>> >>>>> The error message of running ex19 is: >>>>> >>>>> >>>>> ================= >>>>> >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit >>> test >>>>> >>>>> Running test examples to verify correct installation >>>>> >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit >>> and PETSC_ARCH=arch-win64-debug >>>>> >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 >>> MPI process >>>>> >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>>> >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared >>> libraries: ?: cannot open shared object file: No such file or directory >>>>> >>>>> ================= >>>>> >>>>> >>>>> Thanks a lot for any suggestions. >>>>> >>>>> >>>>> Best Regards, >>>>> >>>>> Qin >>> >>>>> >>>>> >>>>> >>>>> >>>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 31 08:38:37 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 31 Mar 2020 08:38:37 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> Message-ID: On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > >> We just cannot cope with spaces in paths. Can you use the shortened > >> contiguous name instead of "Program File"? > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html Satish > > Best regards, > > Jacob Faibussowitsch > (Jacob Fai - booss - oh - vitch) > Cell: (312) 694-3391 > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > >> > >>> Hi, > >>> > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > >>> --with-mpi-dir="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > >>> config.log is attached. 
> >>> > >>> The following is my configuration: > >>> =============== > >>> > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > >>> --with-mpi-include="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib" --with- > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > >>> --with-xt=0 --with-shared-libraries=0 > >>> > >>> ============= > >>> > >>> Thanks for any suggestions. > >>> > >> We just cannot cope with spaces in paths. Can you use the shortened > >> contiguous name instead of "Program File"? > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > The way to get this is: (for example) > > > > balay at ps5 ~ > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > Satish > > > > > > > >> > >> Thanks, > >> > >> Matt > >> > >>> Regards, > >>> > >>> Qin > >>> > >>> > >>> > >>> > >>> > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > >>> knepley at gmail.com> wrote: > >>> > >>> > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > >>> petsc-users at mcs.anl.gov> wrote: > >>> > >>> Hi Satish, > >>> > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > >>> MPICH2, probably because I have set the former's path in environment > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > >>> with Intel-MPI. > >>> > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > >>> related to MPI? > >>> > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > >>> the following output: > >>> > >>> ============ > >>> python ./arch-ci-mswin-intel.py > >>> Traceback (most recent call last): > >>> File "./arch-ci-mswin-intel.py", line 10, in > >>> import configure > >>> ImportError: No module named configure > >>> ============ > >>> > >>> > >>> You have to run those from $PETSC_DIR. > >>> > >>> Matt > >>> > >>> > >>> Thanks, > >>> Qin > >>> > >>> > >>> > >>> I will try to use Intel-MPI and see what will happen. > >>> > >>> Thanks, > >>> Qin > >>> > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > >>> balay at mcs.anl.gov> wrote: > >>> > >>> > >>> Please preserve cc: to the list > >>> > >>>> shared libraries: disabled > >>> > >>> So PETSc is correctly built as static. > >>> > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > >>> libraries: ?: cannot open shared object file: No such file or directory > >>> > >>> So its not clear which shared library this error is referring to. But then > >>> - this error was with petsc-3.4.2 > >>> > >>> You can always try to run the code manually without mpiexec - and see if > >>> that works. 
> >>> > >>> cd src/ksp/ksp/examples/tutorials > >>> make ex2 > >>> ./ex2 > >>> > >>> Wrt MSMPI - yes its free to download > >>> > >>> And PETSc does work with Intel-MPI. It might be a separate > >>> download/install. [so I can't say if what you have is the correct install > >>> of IntelMPI or not] > >>> > >>> Check the builds we use for testing - for ex: > >>> config/examples/arch-ci-mswin-*.py > >>> > >>> Satish > >>> > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > >>> > >>>> Hi Satish, > >>>> The configure.log and RDict.log of Petsc-3.12.4 build is attached. > >>>> Is the MSMPI free to use in Windows-10? > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > >>> subdirectory of it. > >>>> Thanks a lot for your help.Qin > >>>> On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > >>> balay at mcs.anl.gov> wrote: > >>>> > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > >>>> > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > >>> os/compilers/libraries. > >>>> > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > >>>> > >>>> We recommend 64bit MSMPI for windows. > >>>> > >>>> Satish > >>>> > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > >>>> > >>>>> Hello, > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > >>> configuration/compilation/installation seem to finish without problem, but > >>> test program (ex19) failed since it could not find a shared lib. Then I > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > >>> Petsc subroutines. Note that this package was built, tested and worked well > >>> with the same Fortran-90 program in my Windows-7 workstation. > >>>>> > >>>>> Also tried Petsc-3.12.4 but got the same errors. > >>>>> > >>>>> The following is my configuration: > >>>>> > >>>>> > >>>>> =============== > >>>>> > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > >>>>> > >>>>> =============== > >>>>> > >>>>> > >>>>> The error message of running ex19 is: > >>>>> > >>>>> > >>>>> ================= > >>>>> > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > >>> test > >>>>> > >>>>> Running test examples to verify correct installation > >>>>> > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > >>> and PETSC_ARCH=arch-win64-debug > >>>>> > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > >>> MPI process > >>>>> > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > >>>>> > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > >>> libraries: ?: cannot open shared object file: No such file or directory > >>>>> > >>>>> ================= > >>>>> > >>>>> > >>>>> Thanks a lot for any suggestions. 
> >>>>> > >>>>> > >>>>> Best Regards, > >>>>> > >>>>> Qin > >>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>> > >>> > >>> > >>> -- > >>> What most experimenters take for granted before they begin their > >>> experiments is infinitely more interesting than any results to which their > >>> experiments lead. > >>> -- Norbert Wiener > >>> > >>> https://www.cse.buffalo.edu/~knepley/ > >>> > > > From lu_qin_2000 at yahoo.com Tue Mar 31 10:54:53 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Tue, 31 Mar 2020 15:54:53 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> Message-ID: <1699619984.1183207.1585670093716@mail.yahoo.com> Hello, I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). The configuration is attached below. Do you any suggestions how to proceed? Thanks,Qin ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote: On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > >> We just cannot cope with spaces in paths. Can you use the shortened > >> contiguous name instead of "Program File"? > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html Satish > > Best regards, > > Jacob Faibussowitsch > (Jacob Fai - booss - oh - vitch) > Cell: (312) 694-3391 > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > >> > >>> Hi, > >>> > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > >>> --with-mpi-dir="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > >>> config.log is attached. 
> >>> > >>> The following is my configuration: > >>> =============== > >>> > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > >>> --with-mpi-include="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > >>> --with-xt=0 --with-shared-libraries=0 > >>> > >>> ============= > >>> > >>> Thanks for any suggestions. > >>> > >> We just cannot cope with spaces in paths. Can you use the shortened > >> contiguous name instead of "Program File"? > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > The way to get this is: (for example) > > > > balay at ps5 ~ > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > Satish > > > > > > > >> > >>? Thanks, > >> > >>? ? Matt > >> > >>> Regards, > >>> > >>> Qin > >>> > >>> > >>> > >>> > >>> > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > >>> knepley at gmail.com> wrote: > >>> > >>> > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > >>> petsc-users at mcs.anl.gov> wrote: > >>> > >>> Hi Satish, > >>> > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > >>> MPICH2, probably because I have set the former's path in environment > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > >>> with Intel-MPI. > >>> > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > >>> related to MPI? > >>> > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > >>> the following output: > >>> > >>> ============ > >>> python ./arch-ci-mswin-intel.py > >>> Traceback (most recent call last): > >>>? File "./arch-ci-mswin-intel.py", line 10, in > >>>? ? import configure > >>> ImportError: No module named configure > >>> ============ > >>> > >>> > >>> You have to run those from $PETSC_DIR. > >>> > >>>? Matt > >>> > >>> > >>> Thanks, > >>> Qin > >>> > >>> > >>> > >>> I will try to use Intel-MPI and see what will happen. > >>> > >>> Thanks, > >>> Qin > >>> > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > >>> balay at mcs.anl.gov> wrote: > >>> > >>> > >>> Please preserve cc: to the list > >>> > >>>> shared libraries: disabled > >>> > >>> So PETSc? is correctly built as static. > >>> > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > >>> libraries: ?: cannot open shared object file: No such file or directory > >>> > >>> So its not clear which shared library this error is referring to. But then > >>> - this error was with petsc-3.4.2 > >>> > >>> You can always try to run the code manually without mpiexec - and see if > >>> that works. 
> >>> > >>> cd src/ksp/ksp/examples/tutorials > >>> make ex2 > >>> ./ex2 > >>> > >>> Wrt MSMPI - yes its free to download > >>> > >>> And PETSc does work with Intel-MPI. It might be a separate > >>> download/install. [so I can't say if what you have is the correct install > >>> of IntelMPI or not] > >>> > >>> Check the builds we use for testing - for ex: > >>> config/examples/arch-ci-mswin-*.py > >>> > >>> Satish > >>> > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > >>> > >>>> Hi Satish, > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > >>>> Is the MSMPI free to use in Windows-10? > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > >>> subdirectory of it. > >>>> Thanks a lot for your help.Qin > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > >>> balay at mcs.anl.gov> wrote: > >>>> > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > >>>> > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > >>> os/compilers/libraries. > >>>> > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > >>>> > >>>> We recommend 64bit MSMPI for windows. > >>>> > >>>> Satish > >>>> > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > >>>> > >>>>> Hello, > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > >>> configuration/compilation/installation seem to finish without problem, but > >>> test program (ex19) failed since it could not find a shared lib. Then I > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > >>> Petsc subroutines. Note that this package was built, tested and worked well > >>> with the same Fortran-90 program in my Windows-7 workstation. > >>>>> > >>>>> Also tried Petsc-3.12.4 but got the same errors. > >>>>> > >>>>> The following is my configuration: > >>>>> > >>>>> > >>>>> =============== > >>>>> > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > >>>>> > >>>>> =============== > >>>>> > >>>>> > >>>>> The error message of running ex19 is: > >>>>> > >>>>> > >>>>> ================= > >>>>> > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > >>> test > >>>>> > >>>>> Running test examples to verify correct installation > >>>>> > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > >>> and PETSC_ARCH=arch-win64-debug > >>>>> > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > >>> MPI process > >>>>> > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > >>>>> > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > >>> libraries: ?: cannot open shared object file: No such file or directory > >>>>> > >>>>> ================= > >>>>> > >>>>> > >>>>> Thanks a lot for any suggestions. 
> >>>>> > >>>>> > >>>>> Best Regards, > >>>>> > >>>>> Qin > >>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>> > >>> > >>> > >>> -- > >>> What most experimenters take for granted before they begin their > >>> experiments is infinitely more interesting than any results to which their > >>> experiments lead. > >>> -- Norbert Wiener > >>> > >>> https://www.cse.buffalo.edu/~knepley/ > >>> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 31 11:01:47 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 31 Mar 2020 11:01:47 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <1699619984.1183207.1585670093716@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> Message-ID: Do PETSc examples that use KSPSetType() say src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? Its probably best to run your code in a debugger to determine the problem. [If your code can compile on linux - I'll also suggest running it with valgrind] Satish On Tue, 31 Mar 2020, Qin Lu wrote: > Hello, > I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). The configuration is attached below. > > Do you any suggestions how to proceed? > Thanks,Qin > ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote: > > On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > >> contiguous name instead of "Program File"? > > > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. 
> > No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html > > Satish > > > > > Best regards, > > > > Jacob Faibussowitsch > > (Jacob Fai - booss - oh - vitch) > > Cell: (312) 694-3391 > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > >> > > >>> Hi, > > >>> > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > > >>> config.log is attached. > > >>> > > >>> The following is my configuration: > > >>> =============== > > >>> > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > >>> --with-mpi-include="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > >>> --with-xt=0 --with-shared-libraries=0 > > >>> > > >>> ============= > > >>> > > >>> Thanks for any suggestions. > > >>> > > >> We just cannot cope with spaces in paths. Can you use the shortened > > >> contiguous name instead of "Program File"? > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > > > The way to get this is: (for example) > > > > > > balay at ps5 ~ > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > Satish > > > > > > > > > > > >> > > >>? Thanks, > > >> > > >>? ? Matt > > >> > > >>> Regards, > > >>> > > >>> Qin > > >>> > > >>> > > >>> > > >>> > > >>> > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > >>> knepley at gmail.com> wrote: > > >>> > > >>> > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > >>> petsc-users at mcs.anl.gov> wrote: > > >>> > > >>> Hi Satish, > > >>> > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > >>> MPICH2, probably because I have set the former's path in environment > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > >>> with Intel-MPI. > > >>> > > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > > >>> related to MPI? > > >>> > > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > >>> the following output: > > >>> > > >>> ============ > > >>> python ./arch-ci-mswin-intel.py > > >>> Traceback (most recent call last): > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > >>>? ? 
import configure > > >>> ImportError: No module named configure > > >>> ============ > > >>> > > >>> > > >>> You have to run those from $PETSC_DIR. > > >>> > > >>>? Matt > > >>> > > >>> > > >>> Thanks, > > >>> Qin > > >>> > > >>> > > >>> > > >>> I will try to use Intel-MPI and see what will happen. > > >>> > > >>> Thanks, > > >>> Qin > > >>> > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > >>> balay at mcs.anl.gov> wrote: > > >>> > > >>> > > >>> Please preserve cc: to the list > > >>> > > >>>> shared libraries: disabled > > >>> > > >>> So PETSc? is correctly built as static. > > >>> > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > >>> libraries: ?: cannot open shared object file: No such file or directory > > >>> > > >>> So its not clear which shared library this error is referring to. But then > > >>> - this error was with petsc-3.4.2 > > >>> > > >>> You can always try to run the code manually without mpiexec - and see if > > >>> that works. > > >>> > > >>> cd src/ksp/ksp/examples/tutorials > > >>> make ex2 > > >>> ./ex2 > > >>> > > >>> Wrt MSMPI - yes its free to download > > >>> > > >>> And PETSc does work with Intel-MPI. It might be a separate > > >>> download/install. [so I can't say if what you have is the correct install > > >>> of IntelMPI or not] > > >>> > > >>> Check the builds we use for testing - for ex: > > >>> config/examples/arch-ci-mswin-*.py > > >>> > > >>> Satish > > >>> > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > >>> > > >>>> Hi Satish, > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > > >>>> Is the MSMPI free to use in Windows-10? > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > >>> subdirectory of it. > > >>>> Thanks a lot for your help.Qin > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > >>> balay at mcs.anl.gov> wrote: > > >>>> > > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > > >>>> > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > >>> os/compilers/libraries. > > >>>> > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > >>>> > > >>>> We recommend 64bit MSMPI for windows. > > >>>> > > >>>> Satish > > >>>> > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > >>>> > > >>>>> Hello, > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > >>> configuration/compilation/installation seem to finish without problem, but > > >>> test program (ex19) failed since it could not find a shared lib. Then I > > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > >>> Petsc subroutines. Note that this package was built, tested and worked well > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > >>>>> > > >>>>> Also tried Petsc-3.12.4 but got the same errors. 
> > >>>>> > > >>>>> The following is my configuration: > > >>>>> > > >>>>> > > >>>>> =============== > > >>>>> > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > >>>>> > > >>>>> =============== > > >>>>> > > >>>>> > > >>>>> The error message of running ex19 is: > > >>>>> > > >>>>> > > >>>>> ================= > > >>>>> > > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > >>> test > > >>>>> > > >>>>> Running test examples to verify correct installation > > >>>>> > > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > >>> and PETSC_ARCH=arch-win64-debug > > >>>>> > > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > >>> MPI process > > >>>>> > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > >>>>> > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > >>> libraries: ?: cannot open shared object file: No such file or directory > > >>>>> > > >>>>> ================= > > >>>>> > > >>>>> > > >>>>> Thanks a lot for any suggestions. > > >>>>> > > >>>>> > > >>>>> Best Regards, > > >>>>> > > >>>>> Qin > > >>> > > >>>>> > > >>>>> > > >>>>> > > >>>>> > > >>>>> > > >>> > > >>> > > >>> > > >>> -- > > >>> What most experimenters take for granted before they begin their > > >>> experiments is infinitely more interesting than any results to which their > > >>> experiments lead. > > >>> -- Norbert Wiener > > >>> > > >>> https://www.cse.buffalo.edu/~knepley/ > > >>> > > > > > > From lu_qin_2000 at yahoo.com Tue Mar 31 11:19:44 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Tue, 31 Mar 2020 16:19:44 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> Message-ID: <1030007059.1152545.1585671584911@mail.yahoo.com> In the MS Visual Studio debugger, I can see there are 2 calls before?KSPSetType: call PetscInitialize(PETSC_NULL_CHARACTER,ierr) ? call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) It turns out KSPCreate returns ierr=1, so it is the first Petsc call that got error. My program in Linux (also built with Intel compilers 2018) works without problem. Thanks, Qin On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay wrote: Do PETSc examples that use KSPSetType() say src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? Its probably best to run your code in a debugger to determine the problem. [If your code can compile on linux - I'll also suggest running it with valgrind] Satish On Tue, 31 Mar 2020, Qin Lu wrote: >? Hello, > I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. 
However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). The configuration is attached below. > > Do you any suggestions how to proceed? > Thanks,Qin > ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > >? ? On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote:? >? >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > >> contiguous name instead of "Program File"? > > > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. > > No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html > > Satish > > > > > Best regards, > > > > Jacob Faibussowitsch > > (Jacob Fai - booss - oh - vitch) > > Cell: (312) 694-3391 > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > >> > > >>> Hi, > > >>> > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > > >>> config.log is attached. > > >>> > > >>> The following is my configuration: > > >>> =============== > > >>> > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > >>> --with-mpi-include="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > >>> --with-xt=0 --with-shared-libraries=0 > > >>> > > >>> ============= > > >>> > > >>> Thanks for any suggestions. > > >>> > > >> We just cannot cope with spaces in paths. Can you use the shortened > > >> contiguous name instead of "Program File"? 
> > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > > > The way to get this is: (for example) > > > > > > balay at ps5 ~ > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > Satish > > > > > > > > > > > >> > > >>? Thanks, > > >> > > >>? ? Matt > > >> > > >>> Regards, > > >>> > > >>> Qin > > >>> > > >>> > > >>> > > >>> > > >>> > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > >>> knepley at gmail.com> wrote: > > >>> > > >>> > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > >>> petsc-users at mcs.anl.gov> wrote: > > >>> > > >>> Hi Satish, > > >>> > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > >>> MPICH2, probably because I have set the former's path in environment > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > >>> with Intel-MPI. > > >>> > > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > > >>> related to MPI? > > >>> > > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > >>> the following output: > > >>> > > >>> ============ > > >>> python ./arch-ci-mswin-intel.py > > >>> Traceback (most recent call last): > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > >>>? ? import configure > > >>> ImportError: No module named configure > > >>> ============ > > >>> > > >>> > > >>> You have to run those from $PETSC_DIR. > > >>> > > >>>? Matt > > >>> > > >>> > > >>> Thanks, > > >>> Qin > > >>> > > >>> > > >>> > > >>> I will try to use Intel-MPI and see what will happen. > > >>> > > >>> Thanks, > > >>> Qin > > >>> > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > >>> balay at mcs.anl.gov> wrote: > > >>> > > >>> > > >>> Please preserve cc: to the list > > >>> > > >>>> shared libraries: disabled > > >>> > > >>> So PETSc? is correctly built as static. > > >>> > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > >>> libraries: ?: cannot open shared object file: No such file or directory > > >>> > > >>> So its not clear which shared library this error is referring to. But then > > >>> - this error was with petsc-3.4.2 > > >>> > > >>> You can always try to run the code manually without mpiexec - and see if > > >>> that works. > > >>> > > >>> cd src/ksp/ksp/examples/tutorials > > >>> make ex2 > > >>> ./ex2 > > >>> > > >>> Wrt MSMPI - yes its free to download > > >>> > > >>> And PETSc does work with Intel-MPI. It might be a separate > > >>> download/install. [so I can't say if what you have is the correct install > > >>> of IntelMPI or not] > > >>> > > >>> Check the builds we use for testing - for ex: > > >>> config/examples/arch-ci-mswin-*.py > > >>> > > >>> Satish > > >>> > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > >>> > > >>>> Hi Satish, > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > > >>>> Is the MSMPI free to use in Windows-10? > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > >>> subdirectory of it. 
> > >>>> Thanks a lot for your help.Qin > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > >>> balay at mcs.anl.gov> wrote: > > >>>> > > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > > >>>> > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > >>> os/compilers/libraries. > > >>>> > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > >>>> > > >>>> We recommend 64bit MSMPI for windows. > > >>>> > > >>>> Satish > > >>>> > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > >>>> > > >>>>> Hello, > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > >>> configuration/compilation/installation seem to finish without problem, but > > >>> test program (ex19) failed since it could not find a shared lib. Then I > > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > >>> Petsc subroutines. Note that this package was built, tested and worked well > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > >>>>> > > >>>>> Also tried Petsc-3.12.4 but got the same errors. > > >>>>> > > >>>>> The following is my configuration: > > >>>>> > > >>>>> > > >>>>> =============== > > >>>>> > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > >>>>> > > >>>>> =============== > > >>>>> > > >>>>> > > >>>>> The error message of running ex19 is: > > >>>>> > > >>>>> > > >>>>> ================= > > >>>>> > > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > >>> test > > >>>>> > > >>>>> Running test examples to verify correct installation > > >>>>> > > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > >>> and PETSC_ARCH=arch-win64-debug > > >>>>> > > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > >>> MPI process > > >>>>> > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > >>>>> > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > >>> libraries: ?: cannot open shared object file: No such file or directory > > >>>>> > > >>>>> ================= > > >>>>> > > >>>>> > > >>>>> Thanks a lot for any suggestions. > > >>>>> > > >>>>> > > >>>>> Best Regards, > > >>>>> > > >>>>> Qin > > >>> > > >>>>> > > >>>>> > > >>>>> > > >>>>> > > >>>>> > > >>> > > >>> > > >>> > > >>> -- > > >>> What most experimenters take for granted before they begin their > > >>> experiments is infinitely more interesting than any results to which their > > >>> experiments lead. > > >>> -- Norbert Wiener > > >>> > > >>> https://www.cse.buffalo.edu/~knepley/ > > >>> > > > > > >? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Tue Mar 31 11:46:26 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 31 Mar 2020 11:46:26 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <1030007059.1152545.1585671584911@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> Message-ID: Try PETSc examples with KSPCreate() - do they run correctly? How do you build your code - do you use petsc formatted makefile? Look for differences. Also run your code in valgrind on linux. Or you need to debug further on windows.. Satish On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > In the MS Visual Studio debugger, I can see there are 2 calls before?KSPSetType: > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) ? > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call that got error. > > My program in Linux (also built with Intel compilers 2018) works without problem. > > Thanks, > > Qin > > > > > > On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay wrote: > > Do PETSc examples that use KSPSetType() say src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > Its probably best to run your code in a debugger to determine the problem. > > [If your code can compile on linux - I'll also suggest running it with valgrind] > > Satish > > On Tue, 31 Mar 2020, Qin Lu wrote: > > >? Hello, > > I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). The configuration is attached below. > > > > Do you any suggestions how to proceed? > > Thanks,Qin > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > >? ? On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote:? > >? > >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > >> contiguous name instead of "Program File"? > > > > > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. 
> > > > No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > Satish > > > > > > > > Best regards, > > > > > > Jacob Faibussowitsch > > > (Jacob Fai - booss - oh - vitch) > > > Cell: (312) 694-3391 > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > > >> > > > >>> Hi, > > > >>> > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > > > >>> config.log is attached. > > > >>> > > > >>> The following is my configuration: > > > >>> =============== > > > >>> > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > > >>> --with-xt=0 --with-shared-libraries=0 > > > >>> > > > >>> ============= > > > >>> > > > >>> Thanks for any suggestions. > > > >>> > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > > > > > The way to get this is: (for example) > > > > > > > > balay at ps5 ~ > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > Satish > > > > > > > > > > > > > > > >> > > > >>? Thanks, > > > >> > > > >>? ? Matt > > > >> > > > >>> Regards, > > > >>> > > > >>> Qin > > > >>> > > > >>> > > > >>> > > > >>> > > > >>> > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > >>> knepley at gmail.com> wrote: > > > >>> > > > >>> > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > >>> petsc-users at mcs.anl.gov> wrote: > > > >>> > > > >>> Hi Satish, > > > >>> > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > > >>> MPICH2, probably because I have set the former's path in environment > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > > >>> with Intel-MPI. > > > >>> > > > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > > > >>> related to MPI? 
> > > >>> > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > > >>> the following output: > > > >>> > > > >>> ============ > > > >>> python ./arch-ci-mswin-intel.py > > > >>> Traceback (most recent call last): > > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > > >>>? ? import configure > > > >>> ImportError: No module named configure > > > >>> ============ > > > >>> > > > >>> > > > >>> You have to run those from $PETSC_DIR. > > > >>> > > > >>>? Matt > > > >>> > > > >>> > > > >>> Thanks, > > > >>> Qin > > > >>> > > > >>> > > > >>> > > > >>> I will try to use Intel-MPI and see what will happen. > > > >>> > > > >>> Thanks, > > > >>> Qin > > > >>> > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > >>> balay at mcs.anl.gov> wrote: > > > >>> > > > >>> > > > >>> Please preserve cc: to the list > > > >>> > > > >>>> shared libraries: disabled > > > >>> > > > >>> So PETSc? is correctly built as static. > > > >>> > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > >>> > > > >>> So its not clear which shared library this error is referring to. But then > > > >>> - this error was with petsc-3.4.2 > > > >>> > > > >>> You can always try to run the code manually without mpiexec - and see if > > > >>> that works. > > > >>> > > > >>> cd src/ksp/ksp/examples/tutorials > > > >>> make ex2 > > > >>> ./ex2 > > > >>> > > > >>> Wrt MSMPI - yes its free to download > > > >>> > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > >>> download/install. [so I can't say if what you have is the correct install > > > >>> of IntelMPI or not] > > > >>> > > > >>> Check the builds we use for testing - for ex: > > > >>> config/examples/arch-ci-mswin-*.py > > > >>> > > > >>> Satish > > > >>> > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > >>> > > > >>>> Hi Satish, > > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > > > >>>> Is the MSMPI free to use in Windows-10? > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > > > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > > >>> subdirectory of it. > > > >>>> Thanks a lot for your help.Qin > > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > >>> balay at mcs.anl.gov> wrote: > > > >>>> > > > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > > > >>>> > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > >>> os/compilers/libraries. > > > >>>> > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > >>>> > > > >>>> We recommend 64bit MSMPI for windows. > > > >>>> > > > >>>> Satish > > > >>>> > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > >>>> > > > >>>>> Hello, > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > >>> configuration/compilation/installation seem to finish without problem, but > > > >>> test program (ex19) failed since it could not find a shared lib. Then I > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > > > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > >>> Petsc subroutines. 
Note that this package was built, tested and worked well > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > >>>>> > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. > > > >>>>> > > > >>>>> The following is my configuration: > > > >>>>> > > > >>>>> > > > >>>>> =============== > > > >>>>> > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > >>>>> > > > >>>>> =============== > > > >>>>> > > > >>>>> > > > >>>>> The error message of running ex19 is: > > > >>>>> > > > >>>>> > > > >>>>> ================= > > > >>>>> > > > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > >>> test > > > >>>>> > > > >>>>> Running test examples to verify correct installation > > > >>>>> > > > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > >>> and PETSC_ARCH=arch-win64-debug > > > >>>>> > > > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > > >>> MPI process > > > >>>>> > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > >>>>> > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > >>>>> > > > >>>>> ================= > > > >>>>> > > > >>>>> > > > >>>>> Thanks a lot for any suggestions. > > > >>>>> > > > >>>>> > > > >>>>> Best Regards, > > > >>>>> > > > >>>>> Qin > > > >>> > > > >>>>> > > > >>>>> > > > >>>>> > > > >>>>> > > > >>>>> > > > >>> > > > >>> > > > >>> > > > >>> -- > > > >>> What most experimenters take for granted before they begin their > > > >>> experiments is infinitely more interesting than any results to which their > > > >>> experiments lead. > > > >>> -- Norbert Wiener > > > >>> > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > >>> > > > > > > > > >? > From balay at mcs.anl.gov Tue Mar 31 11:51:37 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 31 Mar 2020 11:51:37 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> Message-ID: And use 'CHKERRA(ierr)' in your code to catch such failures early. Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 >>>>>>>>> call PetscInitialize(PETSC_NULL_CHARACTER,ierr) if (ierr /= 0) then write(6,*)'Unable to initialize PETSc' stop endif call PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) CHKERRA(ierr) <<<<<< etc.. Satish On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > Try PETSc examples with KSPCreate() - do they run correctly? > > How do you build your code - do you use petsc formatted makefile? > > Look for differences. Also run your code in valgrind on linux. Or you need to debug further on windows.. 
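Put together with the CHKERRA(ierr) suggestion above, a minimal sketch of that checking pattern around the calls that fail here might look as follows (illustrative only -- ksp_solver is the variable name used in the user's code, the include/use lines follow the ex7f.F90 style referenced above, and this is not the user's actual program):

=============
      program check_ksp
#include <petsc/finclude/petscksp.h>
      use petscksp
      implicit none

      KSP              ksp_solver
      PetscErrorCode   ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      if (ierr /= 0) then
        write(6,*) 'Unable to initialize PETSc'
        stop
      endif

! CHKERRA prints the file/line of the failing call and aborts, so a bad
! KSPCreate is reported immediately instead of surfacing later
      call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr)
      CHKERRA(ierr)
      call KSPSetType(ksp_solver,KSPBCGS,ierr)
      CHKERRA(ierr)

      call KSPDestroy(ksp_solver,ierr)
      CHKERRA(ierr)
      call PetscFinalize(ierr)
      end program check_ksp
=============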
> > Satish > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > In the MS Visual Studio debugger, I can see there are 2 calls before?KSPSetType: > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) ? > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call that got error. > > > > My program in Linux (also built with Intel compilers 2018) works without problem. > > > > Thanks, > > > > Qin > > > > > > > > > > > > On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay wrote: > > > > Do PETSc examples that use KSPSetType() say src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > Its probably best to run your code in a debugger to determine the problem. > > > > [If your code can compile on linux - I'll also suggest running it with valgrind] > > > > Satish > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > >? Hello, > > > I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). The configuration is attached below. > > > > > > Do you any suggestions how to proceed? > > > Thanks,Qin > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > >? ? On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote:? > > >? > > >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > >> contiguous name instead of "Program File"? > > > > > > > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. 
> > > > > > No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > Satish > > > > > > > > > > > Best regards, > > > > > > > > Jacob Faibussowitsch > > > > (Jacob Fai - booss - oh - vitch) > > > > Cell: (312) 694-3391 > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > > > >> > > > > >>> Hi, > > > > >>> > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > > > > >>> config.log is attached. > > > > >>> > > > > >>> The following is my configuration: > > > > >>> =============== > > > > >>> > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > > > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > >>> > > > > >>> ============= > > > > >>> > > > > >>> Thanks for any suggestions. > > > > >>> > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > balay at ps5 ~ > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > >> > > > > >>? Thanks, > > > > >> > > > > >>? ? Matt > > > > >> > > > > >>> Regards, > > > > >>> > > > > >>> Qin > > > > >>> > > > > >>> > > > > >>> > > > > >>> > > > > >>> > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > >>> knepley at gmail.com> wrote: > > > > >>> > > > > >>> > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > >>> > > > > >>> Hi Satish, > > > > >>> > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > > > >>> MPICH2, probably because I have set the former's path in environment > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > > > >>> with Intel-MPI. > > > > >>> > > > > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > >>> my Fortran-90 program, do you have any idea what can be wrong? 
Can it be > > > > >>> related to MPI? > > > > >>> > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > > > >>> the following output: > > > > >>> > > > > >>> ============ > > > > >>> python ./arch-ci-mswin-intel.py > > > > >>> Traceback (most recent call last): > > > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > > > >>>? ? import configure > > > > >>> ImportError: No module named configure > > > > >>> ============ > > > > >>> > > > > >>> > > > > >>> You have to run those from $PETSC_DIR. > > > > >>> > > > > >>>? Matt > > > > >>> > > > > >>> > > > > >>> Thanks, > > > > >>> Qin > > > > >>> > > > > >>> > > > > >>> > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > >>> > > > > >>> Thanks, > > > > >>> Qin > > > > >>> > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > >>> balay at mcs.anl.gov> wrote: > > > > >>> > > > > >>> > > > > >>> Please preserve cc: to the list > > > > >>> > > > > >>>> shared libraries: disabled > > > > >>> > > > > >>> So PETSc? is correctly built as static. > > > > >>> > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > >>> > > > > >>> So its not clear which shared library this error is referring to. But then > > > > >>> - this error was with petsc-3.4.2 > > > > >>> > > > > >>> You can always try to run the code manually without mpiexec - and see if > > > > >>> that works. > > > > >>> > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > >>> make ex2 > > > > >>> ./ex2 > > > > >>> > > > > >>> Wrt MSMPI - yes its free to download > > > > >>> > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > >>> download/install. [so I can't say if what you have is the correct install > > > > >>> of IntelMPI or not] > > > > >>> > > > > >>> Check the builds we use for testing - for ex: > > > > >>> config/examples/arch-ci-mswin-*.py > > > > >>> > > > > >>> Satish > > > > >>> > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > >>> > > > > >>>> Hi Satish, > > > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > > > > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > > > >>> subdirectory of it. > > > > >>>> Thanks a lot for your help.Qin > > > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > >>> balay at mcs.anl.gov> wrote: > > > > >>>> > > > > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > > > > >>>> > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > >>> os/compilers/libraries. > > > > >>>> > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > > >>>> > > > > >>>> We recommend 64bit MSMPI for windows. > > > > >>>> > > > > >>>> Satish > > > > >>>> > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > >>>> > > > > >>>>> Hello, > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > > >>> configuration/compilation/installation seem to finish without problem, but > > > > >>> test program (ex19) failed since it could not find a shared lib. 
Then I > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > > > > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > >>> Petsc subroutines. Note that this package was built, tested and worked well > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > >>>>> > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. > > > > >>>>> > > > > >>>>> The following is my configuration: > > > > >>>>> > > > > >>>>> > > > > >>>>> =============== > > > > >>>>> > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > >>>>> > > > > >>>>> =============== > > > > >>>>> > > > > >>>>> > > > > >>>>> The error message of running ex19 is: > > > > >>>>> > > > > >>>>> > > > > >>>>> ================= > > > > >>>>> > > > > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > >>> test > > > > >>>>> > > > > >>>>> Running test examples to verify correct installation > > > > >>>>> > > > > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > >>>>> > > > > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > > > >>> MPI process > > > > >>>>> > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > >>>>> > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > >>>>> > > > > >>>>> ================= > > > > >>>>> > > > > >>>>> > > > > >>>>> Thanks a lot for any suggestions. > > > > >>>>> > > > > >>>>> > > > > >>>>> Best Regards, > > > > >>>>> > > > > >>>>> Qin > > > > >>> > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > > >>> > > > > >>> > > > > >>> > > > > >>> -- > > > > >>> What most experimenters take for granted before they begin their > > > > >>> experiments is infinitely more interesting than any results to which their > > > > >>> experiments lead. > > > > >>> -- Norbert Wiener > > > > >>> > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > >>> > > > > > > > > > > > >? > > > From lu_qin_2000 at yahoo.com Tue Mar 31 12:09:52 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Tue, 31 Mar 2020 17:09:52 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> Message-ID: <1128094094.1212697.1585674592663@mail.yahoo.com> I built and tested ex1f.F90 and ex2f.F90, both call?KSPCreate(), both work well. My program is built using either MS Visual Studio or my own makefile. Are there any special compilation/link options required for my program in order to link with Petsc lib in Win-10? 
Thanks,Qin On Tuesday, March 31, 2020, 11:51:43 AM CDT, Satish Balay wrote: And use 'CHKERRA(ierr)' in your code to catch such failures early. Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 >>>>>>>>> ? ? ? call PetscInitialize(PETSC_NULL_CHARACTER,ierr) ? ? ? if (ierr /= 0) then ? ? ? ? write(6,*)'Unable to initialize PETSc' ? ? ? ? stop ? ? ? endif ? ? ? ? ? ? call PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) ? ? ? CHKERRA(ierr) <<<<<< etc.. Satish On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > Try PETSc examples with KSPCreate() - do they run correctly? > > How do you build your code - do you use petsc formatted makefile? > > Look for differences. Also run your code in valgrind on linux. Or you need to debug further on windows.. > > Satish > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > In the MS Visual Studio debugger, I can see there are 2 calls before?KSPSetType: > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) ? > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call that got error. > > > > My program in Linux (also built with Intel compilers 2018) works without problem. > > > > Thanks, > > > > Qin > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay wrote:? > >? > >? Do PETSc examples that use KSPSetType() say src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > Its probably best to run your code in a debugger to determine the problem. > > > > [If your code can compile on linux - I'll also suggest running it with valgrind] > > > > Satish > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > >? Hello, > > > I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). The configuration is attached below. > > > > > > Do you any suggestions how to proceed? > > > Thanks,Qin > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > >? ? On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote:? > > >? > > >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > >> contiguous name instead of "Program File"? > > > > > > > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. 
Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. > > > > > > No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > Satish > > > > > > > > > > > Best regards, > > > > > > > > Jacob Faibussowitsch > > > > (Jacob Fai - booss - oh - vitch) > > > > Cell: (312) 694-3391 > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > > > >> > > > > >>> Hi, > > > > >>> > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > > > > >>> config.log is attached. > > > > >>> > > > > >>> The following is my configuration: > > > > >>> =============== > > > > >>> > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > > > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > >>> > > > > >>> ============= > > > > >>> > > > > >>> Thanks for any suggestions. > > > > >>> > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > balay at ps5 ~ > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > >> > > > > >>? Thanks, > > > > >> > > > > >>? ? Matt > > > > >> > > > > >>> Regards, > > > > >>> > > > > >>> Qin > > > > >>> > > > > >>> > > > > >>> > > > > >>> > > > > >>> > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > >>> knepley at gmail.com> wrote: > > > > >>> > > > > >>> > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > >>> > > > > >>> Hi Satish, > > > > >>> > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > > > >>> MPICH2, probably because I have set the former's path in environment > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > > > >>> with Intel-MPI. 
> > > > >>> > > > > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > > > > >>> related to MPI? > > > > >>> > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > > > >>> the following output: > > > > >>> > > > > >>> ============ > > > > >>> python ./arch-ci-mswin-intel.py > > > > >>> Traceback (most recent call last): > > > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > > > >>>? ? import configure > > > > >>> ImportError: No module named configure > > > > >>> ============ > > > > >>> > > > > >>> > > > > >>> You have to run those from $PETSC_DIR. > > > > >>> > > > > >>>? Matt > > > > >>> > > > > >>> > > > > >>> Thanks, > > > > >>> Qin > > > > >>> > > > > >>> > > > > >>> > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > >>> > > > > >>> Thanks, > > > > >>> Qin > > > > >>> > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > >>> balay at mcs.anl.gov> wrote: > > > > >>> > > > > >>> > > > > >>> Please preserve cc: to the list > > > > >>> > > > > >>>> shared libraries: disabled > > > > >>> > > > > >>> So PETSc? is correctly built as static. > > > > >>> > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > >>> > > > > >>> So its not clear which shared library this error is referring to. But then > > > > >>> - this error was with petsc-3.4.2 > > > > >>> > > > > >>> You can always try to run the code manually without mpiexec - and see if > > > > >>> that works. > > > > >>> > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > >>> make ex2 > > > > >>> ./ex2 > > > > >>> > > > > >>> Wrt MSMPI - yes its free to download > > > > >>> > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > >>> download/install. [so I can't say if what you have is the correct install > > > > >>> of IntelMPI or not] > > > > >>> > > > > >>> Check the builds we use for testing - for ex: > > > > >>> config/examples/arch-ci-mswin-*.py > > > > >>> > > > > >>> Satish > > > > >>> > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > >>> > > > > >>>> Hi Satish, > > > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > > > > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > > > >>> subdirectory of it. > > > > >>>> Thanks a lot for your help.Qin > > > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > >>> balay at mcs.anl.gov> wrote: > > > > >>>> > > > > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > > > > >>>> > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > >>> os/compilers/libraries. > > > > >>>> > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > > >>>> > > > > >>>> We recommend 64bit MSMPI for windows. > > > > >>>> > > > > >>>> Satish > > > > >>>> > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > >>>> > > > > >>>>> Hello, > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. 
The > > > > >>> configuration/compilation/installation seem to finish without problem, but > > > > >>> test program (ex19) failed since it could not find a shared lib. Then I > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > > > > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > >>> Petsc subroutines. Note that this package was built, tested and worked well > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > >>>>> > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. > > > > >>>>> > > > > >>>>> The following is my configuration: > > > > >>>>> > > > > >>>>> > > > > >>>>> =============== > > > > >>>>> > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > >>>>> > > > > >>>>> =============== > > > > >>>>> > > > > >>>>> > > > > >>>>> The error message of running ex19 is: > > > > >>>>> > > > > >>>>> > > > > >>>>> ================= > > > > >>>>> > > > > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > >>> test > > > > >>>>> > > > > >>>>> Running test examples to verify correct installation > > > > >>>>> > > > > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > >>>>> > > > > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > > > >>> MPI process > > > > >>>>> > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > >>>>> > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > >>>>> > > > > >>>>> ================= > > > > >>>>> > > > > >>>>> > > > > >>>>> Thanks a lot for any suggestions. > > > > >>>>> > > > > >>>>> > > > > >>>>> Best Regards, > > > > >>>>> > > > > >>>>> Qin > > > > >>> > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > > >>> > > > > >>> > > > > >>> > > > > >>> -- > > > > >>> What most experimenters take for granted before they begin their > > > > >>> experiments is infinitely more interesting than any results to which their > > > > >>> experiments lead. > > > > >>> -- Norbert Wiener > > > > >>> > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > >>> > > > > > > > > > > > >? > >? > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Tue Mar 31 12:16:29 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 31 Mar 2020 12:16:29 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <1128094094.1212697.1585674592663@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> <1128094094.1212697.1585674592663@mail.yahoo.com> Message-ID: Is your code a single source file? multiple sourcefiles in a single dir? any external dependencies other than petsc? If possible - try compiling your code with petsc makefile. Does the code run correctly this way? Satish On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > I built and tested ex1f.F90 and ex2f.F90, both call?KSPCreate(), both work well. > My program is built using either MS Visual Studio or my own makefile. Are there any special compilation/link options required for my program in order to link with Petsc lib in Win-10? > Thanks,Qin > On Tuesday, March 31, 2020, 11:51:43 AM CDT, Satish Balay wrote: > > And use 'CHKERRA(ierr)' in your code to catch such failures early. > > Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 > > >>>>>>>>> > ? ? ? call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > ? ? ? if (ierr /= 0) then > ? ? ? ? write(6,*)'Unable to initialize PETSc' > ? ? ? ? stop > ? ? ? endif > ? ? ? > ? ? ? call PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) > ? ? ? CHKERRA(ierr) > <<<<<< > > etc.. > > Satish > > On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > > > Try PETSc examples with KSPCreate() - do they run correctly? > > > > How do you build your code - do you use petsc formatted makefile? > > > > Look for differences. Also run your code in valgrind on linux. Or you need to debug further on windows.. > > > > Satish > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > > In the MS Visual Studio debugger, I can see there are 2 calls before?KSPSetType: > > > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) ? > > > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call that got error. > > > > > > My program in Linux (also built with Intel compilers 2018) works without problem. > > > > > > Thanks, > > > > > > Qin > > > > > > > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay wrote:? > > >? > > >? Do PETSc examples that use KSPSetType() say src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > > > Its probably best to run your code in a debugger to determine the problem. > > > > > > [If your code can compile on linux - I'll also suggest running it with valgrind] > > > > > > Satish > > > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > > > >? Hello, > > > > I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). 
The configuration is attached below. > > > > > > > > Do you any suggestions how to proceed? > > > > Thanks,Qin > > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote:? > > > >? > > > >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. > > > > > > > > No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > > > Satish > > > > > > > > > > > > > > Best regards, > > > > > > > > > > Jacob Faibussowitsch > > > > > (Jacob Fai - booss - oh - vitch) > > > > > Cell: (312) 694-3391 > > > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > > > > >> > > > > > >>> Hi, > > > > > >>> > > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > > > > > >>> config.log is attached. > > > > > >>> > > > > > >>> The following is my configuration: > > > > > >>> =============== > > > > > >>> > > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > > > > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > > >>> > > > > > >>> ============= > > > > > >>> > > > > > >>> Thanks for any suggestions. > > > > > >>> > > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > > >> contiguous name instead of "Program File"? 
> > > > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > > > balay at ps5 ~ > > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > >>? Thanks, > > > > > >> > > > > > >>? ? Matt > > > > > >> > > > > > >>> Regards, > > > > > >>> > > > > > >>> Qin > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > > >>> knepley at gmail.com> wrote: > > > > > >>> > > > > > >>> > > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > > >>> > > > > > >>> Hi Satish, > > > > > >>> > > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > > > > >>> MPICH2, probably because I have set the former's path in environment > > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > > > > >>> with Intel-MPI. > > > > > >>> > > > > > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > > > > > >>> related to MPI? > > > > > >>> > > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > > > > >>> the following output: > > > > > >>> > > > > > >>> ============ > > > > > >>> python ./arch-ci-mswin-intel.py > > > > > >>> Traceback (most recent call last): > > > > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > > > > >>>? ? import configure > > > > > >>> ImportError: No module named configure > > > > > >>> ============ > > > > > >>> > > > > > >>> > > > > > >>> You have to run those from $PETSC_DIR. > > > > > >>> > > > > > >>>? Matt > > > > > >>> > > > > > >>> > > > > > >>> Thanks, > > > > > >>> Qin > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > > >>> > > > > > >>> Thanks, > > > > > >>> Qin > > > > > >>> > > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > >>> > > > > > >>> > > > > > >>> Please preserve cc: to the list > > > > > >>> > > > > > >>>> shared libraries: disabled > > > > > >>> > > > > > >>> So PETSc? is correctly built as static. > > > > > >>> > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > > >>> > > > > > >>> So its not clear which shared library this error is referring to. But then > > > > > >>> - this error was with petsc-3.4.2 > > > > > >>> > > > > > >>> You can always try to run the code manually without mpiexec - and see if > > > > > >>> that works. > > > > > >>> > > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > > >>> make ex2 > > > > > >>> ./ex2 > > > > > >>> > > > > > >>> Wrt MSMPI - yes its free to download > > > > > >>> > > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > > >>> download/install. 
[so I can't say if what you have is the correct install > > > > > >>> of IntelMPI or not] > > > > > >>> > > > > > >>> Check the builds we use for testing - for ex: > > > > > >>> config/examples/arch-ci-mswin-*.py > > > > > >>> > > > > > >>> Satish > > > > > >>> > > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > > >>> > > > > > >>>> Hi Satish, > > > > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but for some > > > > > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > > > > >>> subdirectory of it. > > > > > >>>> Thanks a lot for your help.Qin > > > > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > >>>> > > > > > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > > > > > >>>> > > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > > >>> os/compilers/libraries. > > > > > >>>> > > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > > > >>>> > > > > > >>>> We recommend 64bit MSMPI for windows. > > > > > >>>> > > > > > >>>> Satish > > > > > >>>> > > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > >>>> > > > > > >>>>> Hello, > > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > > > >>> configuration/compilation/installation seem to finish without problem, but > > > > > >>> test program (ex19) failed since it could not find a shared lib. Then I > > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > > > > > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > > >>> Petsc subroutines. Note that this package was built, tested and worked well > > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > > >>>>> > > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. 
> > > > > >>>>> > > > > > >>>>> The following is my configuration: > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> =============== > > > > > >>>>> > > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > >>>>> > > > > > >>>>> =============== > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> The error message of running ex19 is: > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> ================= > > > > > >>>>> > > > > > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > >>> test > > > > > >>>>> > > > > > >>>>> Running test examples to verify correct installation > > > > > >>>>> > > > > > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > > >>>>> > > > > > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > > > > >>> MPI process > > > > > >>>>> > > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > >>>>> > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > > >>>>> > > > > > >>>>> ================= > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> Thanks a lot for any suggestions. > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> Best Regards, > > > > > >>>>> > > > > > >>>>> Qin > > > > > >>> > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> -- > > > > > >>> What most experimenters take for granted before they begin their > > > > > >>> experiments is infinitely more interesting than any results to which their > > > > > >>> experiments lead. > > > > > >>> -- Norbert Wiener > > > > > >>> > > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > > >>> > > > > > > > > > > > > > > >? > > >? > > > From lu_qin_2000 at yahoo.com Tue Mar 31 13:42:13 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Tue, 31 Mar 2020 18:42:13 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> <1128094094.1212697.1585674592663@mail.yah oo.com> Message-ID: <216903193.1249363.1585680133244@mail.yahoo.com> My program has multiple files in a single directory, and there are some other dependencies. Are you talking about /petsc-3.12.4/makefile? Is there any instructions on how to compile my code using petsc makefile? Thanks,Qin On Tuesday, March 31, 2020, 12:16:31 PM CDT, Satish Balay wrote: Is your code a single source file?? multiple sourcefiles in a single dir? any external dependencies other than petsc? If possible - try compiling your code with petsc makefile. Does the code run correctly this way? 
Satish On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: >? I built and tested ex1f.F90 and ex2f.F90, both call?KSPCreate(), both work well. > My program is built using either MS Visual Studio or my own makefile. Are there any special compilation/link options required for my program in order to link with Petsc lib in Win-10? > Thanks,Qin >? ? On Tuesday, March 31, 2020, 11:51:43 AM CDT, Satish Balay wrote:? >? >? And use 'CHKERRA(ierr)' in your code to catch such failures early. > > Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 > > >>>>>>>>> > ? ? ? call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > ? ? ? if (ierr /= 0) then > ? ? ? ? write(6,*)'Unable to initialize PETSc' > ? ? ? ? stop > ? ? ? endif > ? ? ? > ? ? ? call PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) > ? ? ? CHKERRA(ierr) > <<<<<< > > etc.. > > Satish > > On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > > > Try PETSc examples with KSPCreate() - do they run correctly? > > > > How do you build your code - do you use petsc formatted makefile? > > > > Look for differences. Also run your code in valgrind on linux. Or you need to debug further on windows.. > > > > Satish > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > > In the MS Visual Studio debugger, I can see there are 2 calls before?KSPSetType: > > > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) ? > > > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call that got error. > > > > > > My program in Linux (also built with Intel compilers 2018) works without problem. > > > > > > Thanks, > > > > > > Qin > > > > > > > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay wrote:? > > >? > > >? Do PETSc examples that use KSPSetType() say src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > > > Its probably best to run your code in a debugger to determine the problem. > > > > > > [If your code can compile on linux - I'll also suggest running it with valgrind] > > > > > > Satish > > > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > > > >? Hello, > > > > I moved Intel-MPI libs to a directory without space, now the configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 worked well with mpiexec. However, my Fortran-90 program linked with this Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), same as what happened when using MPICH2. I suspect the issue is not in MPI, but in how Petsc is configured/built in Windows-10 using Intel compilers (the same program in Win-7 works without problem). The configuration is attached below. > > > > > > > > Do you any suggestions how to proceed? > > > > Thanks,Qin > > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi --with-blas-lapack-dir="/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib"? --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > > > > >? ? 
On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via petsc-users wrote:? > > > >? > > > >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > FYI: Program Files or Program Files(x86) is where windows installs all of its applications (from OS or installed by user). It is best to install your MPI and other packages in root dir C:. Thats why for example MinGW installs itself in there, so it doesn?t have to deal with the space in the path. > > > > > > > > No need to do this alternate install if using cygpath - as per installation instructions https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > > > Satish > > > > > > > > > > > > > > Best regards, > > > > > > > > > > Jacob Faibussowitsch > > > > > (Jacob Fai - booss - oh - vitch) > > > > > Cell: (312) 694-3391 > > > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users wrote: > > > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu wrote: > > > > > >> > > > > > >>> Hi, > > > > > >>> > > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. So I change to > > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't work. The > > > > > >>> config.log is attached. > > > > > >>> > > > > > >>> The following is my configuration: > > > > > >>> =============== > > > > > >>> > > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release-intel-mpi" > > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" --with-mpi-lib="/cygdrive/c/Program > > > > > >>> Files (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 --with-x11=0 > > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > > >>> > > > > > >>> ============= > > > > > >>> > > > > > >>> Thanks for any suggestions. > > > > > >>> > > > > > >> We just cannot cope with spaces in paths. Can you use the shortened > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html has the instructions > > > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > > > balay at ps5 ~ > > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft MPI/Bin/mpiexec'` > > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > >>? Thanks, > > > > > >> > > > > > >>? ? 
Matt > > > > > >> > > > > > >>> Regards, > > > > > >>> > > > > > >>> Qin > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > > >>> knepley at gmail.com> wrote: > > > > > >>> > > > > > >>> > > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > > >>> > > > > > >>> Hi Satish, > > > > > >>> > > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from command line. > > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI instead of > > > > > >>> MPICH2, probably because I have set the former's path in environment > > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI and build Petsc > > > > > >>> with Intel-MPI. > > > > > >>> > > > > > >>> As for the crash of calling to KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > > >>> my Fortran-90 program, do you have any idea what can be wrong? Can it be > > > > > >>> related to MPI? > > > > > >>> > > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you suggested, but got > > > > > >>> the following output: > > > > > >>> > > > > > >>> ============ > > > > > >>> python ./arch-ci-mswin-intel.py > > > > > >>> Traceback (most recent call last): > > > > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > > > > >>>? ? import configure > > > > > >>> ImportError: No module named configure > > > > > >>> ============ > > > > > >>> > > > > > >>> > > > > > >>> You have to run those from $PETSC_DIR. > > > > > >>> > > > > > >>>? Matt > > > > > >>> > > > > > >>> > > > > > >>> Thanks, > > > > > >>> Qin > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > > >>> > > > > > >>> Thanks, > > > > > >>> Qin > > > > > >>> > > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > >>> > > > > > >>> > > > > > >>> Please preserve cc: to the list > > > > > >>> > > > > > >>>> shared libraries: disabled > > > > > >>> > > > > > >>> So PETSc? is correctly built as static. > > > > > >>> > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > > >>> > > > > > >>> So its not clear which shared library this error is referring to. But then > > > > > >>> - this error was with petsc-3.4.2 > > > > > >>> > > > > > >>> You can always try to run the code manually without mpiexec - and see if > > > > > >>> that works. > > > > > >>> > > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > > >>> make ex2 > > > > > >>> ./ex2 > > > > > >>> > > > > > >>> Wrt MSMPI - yes its free to download > > > > > >>> > > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > > >>> download/install. [so I can't say if what you have is the correct install > > > > > >>> of IntelMPI or not] > > > > > >>> > > > > > >>> Check the builds we use for testing - for ex: > > > > > >>> config/examples/arch-ci-mswin-*.py > > > > > >>> > > > > > >>> Satish > > > > > >>> > > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > > >>> > > > > > >>>> Hi Satish, > > > > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is attached. > > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > > >>>> Does Petsc support Intel-MPI? 
I have it in my machine, but for some > > > > > >>> reason I only find the /mpi/intel64/bin, but not /mpi/intel64/include > > > > > >>> subdirectory of it. > > > > > >>>> Thanks a lot for your help.Qin > > > > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > >>>> > > > > > >>>> MPICH is unsupported - and we haven't tested with it for a long time. > > > > > >>>> > > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > > >>> os/compilers/libraries. > > > > > >>>> > > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest Petsc-3.13.0]? > > > > > >>>> > > > > > >>>> We recommend 64bit MSMPI for windows. > > > > > >>>> > > > > > >>>> Satish > > > > > >>>> > > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > >>>> > > > > > >>>>> Hello, > > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 workstation using > > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > > > >>> configuration/compilation/installation seem to finish without problem, but > > > > > >>> test program (ex19) failed since it could not find a shared lib. Then I > > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but it got run > > > > > >>> time crash when it calls KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > > >>> Petsc subroutines. Note that this package was built, tested and worked well > > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > > >>>>> > > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. > > > > > >>>>> > > > > > >>>>> The following is my configuration: > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> =============== > > > > > >>>>> > > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > >>> --with-cxx='win32fe icl' --with-petsc-arch="arch-win64-release" > > > > > >>> --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > >>> (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" --with-debugging=0 > > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > >>>>> > > > > > >>>>> =============== > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> The error message of running ex19 is: > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> ================= > > > > > >>>>> > > > > > >>>>> $ make PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > >>> test > > > > > >>>>> > > > > > >>>>> Running test examples to verify correct installation > > > > > >>>>> > > > > > >>>>> Using PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > > >>>>> > > > > > >>>>> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > > > > > >>> MPI process > > > > > >>>>> > > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > >>>>> > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while loading shared > > > > > >>> libraries: ?: cannot open shared object file: No such file or directory > > > > > >>>>> > > > > > >>>>> ================= > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> Thanks a lot for any suggestions. 
> > > > > >>>>> > > > > > >>>>> > > > > > >>>>> Best Regards, > > > > > >>>>> > > > > > >>>>> Qin > > > > > >>> > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> -- > > > > > >>> What most experimenters take for granted before they begin their > > > > > >>> experiments is infinitely more interesting than any results to which their > > > > > >>> experiments lead. > > > > > >>> -- Norbert Wiener > > > > > >>> > > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > > >>> > > > > > > > > > > > > > > >? > > >? > > >? -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 31 13:49:08 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 31 Mar 2020 14:49:08 -0400 Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <216903193.1249363.1585680133244@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> <216903193.1249363.1585680133244@mail.yahoo.com> Message-ID: On Tue, Mar 31, 2020 at 2:42 PM Qin Lu via petsc-users < petsc-users at mcs.anl.gov> wrote: > My program has multiple files in a single directory, and there are some > other dependencies. Are you talking about /petsc-3.12.4/makefile? Is there > any instructions on how to compile my code using petsc makefile? > Yes, there is a chapter in the manual. Thanks, Matt > Thanks, > Qin > > On Tuesday, March 31, 2020, 12:16:31 PM CDT, Satish Balay < > balay at mcs.anl.gov> wrote: > > > Is your code a single source file? multiple sourcefiles in a single dir? > any external dependencies other than petsc? > > If possible - try compiling your code with petsc makefile. Does the code > run correctly this way? > > Satish > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > I built and tested ex1f.F90 and ex2f.F90, both call KSPCreate(), both > work well. > > My program is built using either MS Visual Studio or my own makefile. > Are there any special compilation/link options required for my program in > order to link with Petsc lib in Win-10? > > Thanks,Qin > > On Tuesday, March 31, 2020, 11:51:43 AM CDT, Satish Balay < > balay at mcs.anl.gov> wrote: > > > > And use 'CHKERRA(ierr)' in your code to catch such failures early. > > > > Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 > > > > >>>>>>>>> > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > if (ierr /= 0) then > > write(6,*)'Unable to initialize PETSc' > > stop > > endif > > > > call > PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) > > CHKERRA(ierr) > > <<<<<< > > > > etc.. > > > > Satish > > > > On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > > > > > Try PETSc examples with KSPCreate() - do they run correctly? > > > > > > How do you build your code - do you use petsc formatted makefile? > > > > > > Look for differences. Also run your code in valgrind on linux. Or you > need to debug further on windows.. 
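To make that kind of early failure visible, the CHKERRA(ierr) pattern quoted above (from src/ksp/ksp/examples/tutorials/ex7f.F90) can be applied directly to the calls that fail in this thread. The fragment below is only a sketch with placeholder names (check_ksp, ksp_solver); it assumes a petsc-3.12-era build with the petsc/finclude headers, checks ierr from PetscInitialize by hand, and wraps KSPCreate/KSPSetType in CHKERRA so a nonzero ierr stops with a PETSc error message instead of a later crash.

=============
      program check_ksp
! Sketch only - placeholder names; save as a .F90 file so the
! preprocessor handles the include below.
#include <petsc/finclude/petscksp.h>
      use petscksp
      implicit none

      PetscErrorCode ierr
      KSP            ksp_solver

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      if (ierr /= 0) then
        write(6,*) 'Unable to initialize PETSc'
        stop
      endif

! KSPCreate is the call reported to return ierr=1 in this thread;
! CHKERRA makes that failure stop the run immediately.
      call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr)
      CHKERRA(ierr)
      call KSPSetType(ksp_solver,KSPBCGS,ierr)
      CHKERRA(ierr)

      call KSPDestroy(ksp_solver,ierr)
      CHKERRA(ierr)
      call PetscFinalize(ierr)
      end program check_ksp
=============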
> > > > > > Satish > > > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > > > > > In the MS Visual Studio debugger, I can see there are 2 calls > before KSPSetType: > > > > > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > > > > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call > that got error. > > > > > > > > My program in Linux (also built with Intel compilers 2018) works > without problem. > > > > > > > > Thanks, > > > > > > > > Qin > > > > > > > > > > > > > > > > > > > > > > > > On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay < > balay at mcs.anl.gov> wrote: > > > > > > > > Do PETSc examples that use KSPSetType() say > src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > > > > > Its probably best to run your code in a debugger to determine the > problem. > > > > > > > > [If your code can compile on linux - I'll also suggest running it > with valgrind] > > > > > > > > Satish > > > > > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > > > > > > Hello, > > > > > I moved Intel-MPI libs to a directory without space, now the > configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 > worked well with mpiexec. However, my Fortran-90 program linked with this > Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), > same as what happened when using MPICH2. I suspect the issue is not in MPI, > but in how Petsc is configured/built in Windows-10 using Intel compilers > (the same program in Win-7 works without problem). The configuration is > attached below. > > > > > > > > > > Do you any suggestions how to proceed? > > > > > Thanks,Qin > > > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe > ifort' --with-cxx='win32fe icl' > --with-petsc-arch="arch-win64-release-intel-mpi" > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > --with-blas-lapack-dir="/cygdrive/c/Program Files > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" > --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib" > --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 > --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > > > > > > > > On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via > petsc-users wrote: > > > > > > > > > > On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > > > > > >> We just cannot cope with spaces in paths. Can you use the > shortened > > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > FYI: Program Files or Program Files(x86) is where windows > installs all of its applications (from OS or installed by user). It is best > to install your MPI and other packages in root dir C:. Thats why for > example MinGW installs itself in there, so it doesn?t have to deal with the > space in the path. 
> > > > > > > > > > No need to do this alternate install if using cygpath - as per > installation instructions > https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > Best regards, > > > > > > > > > > > > Jacob Faibussowitsch > > > > > > (Jacob Fai - booss - oh - vitch) > > > > > > Cell: (312) 694-3391 > > > > > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > > > > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu > wrote: > > > > > > >> > > > > > > >>> Hi, > > > > > > >>> > > > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. > So I change to > > > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't > work. The > > > > > > >>> config.log is attached. > > > > > > >>> > > > > > > >>> The following is my configuration: > > > > > > >>> =============== > > > > > > >>> > > > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > > >>> --with-cxx='win32fe icl' > --with-petsc-arch="arch-win64-release-intel-mpi" > > > > > > >>> > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > >>> > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" > --with-mpi-lib="/cygdrive/c/Program > > > > > > >>> Files > (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib" --with- > > > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 > --with-x11=0 > > > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > > > >>> > > > > > > >>> ============= > > > > > > >>> > > > > > > >>> Thanks for any suggestions. > > > > > > >>> > > > > > > >> We just cannot cope with spaces in paths. Can you use the > shortened > > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths > without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html > has the > instructions > > > > > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > > > > > balay at ps5 ~ > > > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft > MPI/Bin/mpiexec'` > > > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > > >> Thanks, > > > > > > >> > > > > > > >> Matt > > > > > > >> > > > > > > >>> Regards, > > > > > > >>> > > > > > > >>> Qin > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > > > >>> knepley at gmail.com> wrote: > > > > > > >>> > > > > > > >>> > > > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > > > >>> > > > > > > >>> Hi Satish, > > > > > > >>> > > > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from > command line. 
> > > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI > instead of > > > > > > >>> MPICH2, probably because I have set the former's path in > environment > > > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI > and build Petsc > > > > > > >>> with Intel-MPI. > > > > > > >>> > > > > > > >>> As for the crash of calling to > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > > > >>> my Fortran-90 program, do you have any idea what can be > wrong? Can it be > > > > > > >>> related to MPI? > > > > > > >>> > > > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you > suggested, but got > > > > > > >>> the following output: > > > > > > >>> > > > > > > >>> ============ > > > > > > >>> python ./arch-ci-mswin-intel.py > > > > > > >>> Traceback (most recent call last): > > > > > > >>> File "./arch-ci-mswin-intel.py", line 10, in > > > > > > >>> import configure > > > > > > >>> ImportError: No module named configure > > > > > > >>> ============ > > > > > > >>> > > > > > > >>> > > > > > > >>> You have to run those from $PETSC_DIR. > > > > > > >>> > > > > > > >>> Matt > > > > > > >>> > > > > > > >>> > > > > > > >>> Thanks, > > > > > > >>> Qin > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > > > >>> > > > > > > >>> Thanks, > > > > > > >>> Qin > > > > > > >>> > > > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > >>> > > > > > > >>> > > > > > > >>> Please preserve cc: to the list > > > > > > >>> > > > > > > >>>> shared libraries: disabled > > > > > > >>> > > > > > > >>> So PETSc is correctly built as static. > > > > > > >>> > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > loading shared > > > > > > >>> libraries: ?: cannot open shared object file: No such file > or directory > > > > > > >>> > > > > > > >>> So its not clear which shared library this error is > referring to. But then > > > > > > >>> - this error was with petsc-3.4.2 > > > > > > >>> > > > > > > >>> You can always try to run the code manually without mpiexec > - and see if > > > > > > >>> that works. > > > > > > >>> > > > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > > > >>> make ex2 > > > > > > >>> ./ex2 > > > > > > >>> > > > > > > >>> Wrt MSMPI - yes its free to download > > > > > > >>> > > > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > > > >>> download/install. [so I can't say if what you have is the > correct install > > > > > > >>> of IntelMPI or not] > > > > > > >>> > > > > > > >>> Check the builds we use for testing - for ex: > > > > > > >>> config/examples/arch-ci-mswin-*.py > > > > > > >>> > > > > > > >>> Satish > > > > > > >>> > > > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > > > >>> > > > > > > >>>> Hi Satish, > > > > > > >>>> The configure.log and RDict.log of Petsc-3.12.4 build is > attached. > > > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but > for some > > > > > > >>> reason I only find the /mpi/intel64/bin, but not > /mpi/intel64/include > > > > > > >>> subdirectory of it. > > > > > > >>>> Thanks a lot for your help.Qin > > > > > > >>>> On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > >>>> > > > > > > >>>> MPICH is unsupported - and we haven't tested with it for a > long time. 
> > > > > > >>>> > > > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > > > >>> os/compilers/libraries. > > > > > > >>>> > > > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest > Petsc-3.13.0]? > > > > > > >>>> > > > > > > >>>> We recommend 64bit MSMPI for windows. > > > > > > >>>> > > > > > > >>>> Satish > > > > > > >>>> > > > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > >>>> > > > > > > >>>>> Hello, > > > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 > workstation using > > > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > > > > >>> configuration/compilation/installation seem to finish > without problem, but > > > > > > >>> test program (ex19) failed since it could not find a shared > lib. Then I > > > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but > it got run > > > > > > >>> time crash when it calls > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > > > >>> Petsc subroutines. Note that this package was built, tested > and worked well > > > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > > > >>>>> > > > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. > > > > > > >>>>> > > > > > > >>>>> The following is my configuration: > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> =============== > > > > > > >>>>> > > > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe > ifort' > > > > > > >>> --with-cxx='win32fe icl' > --with-petsc-arch="arch-win64-release" > > > > > > >>> > --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > >>> > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" > --with-debugging=0 > > > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 > --with-shared-libraries=0 > > > > > > >>>>> > > > > > > >>>>> =============== > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> The error message of running ex19 is: > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> ================= > > > > > > >>>>> > > > > > > >>>>> $ make > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > >>> test > > > > > > >>>>> > > > > > > >>>>> Running test examples to verify correct installation > > > > > > >>>>> > > > > > > >>>>> Using > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > > > >>>>> > > > > > > >>>>> Possible error running C/C++ > src/snes/examples/tutorials/ex19 with 1 > > > > > > >>> MPI process > > > > > > >>>>> > > > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > > >>>>> > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > loading shared > > > > > > >>> libraries: ?: cannot open shared object file: No such file > or directory > > > > > > >>>>> > > > > > > >>>>> ================= > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> Thanks a lot for any suggestions. 
> > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> Best Regards, > > > > > > >>>>> > > > > > > >>>>> Qin > > > > > > >>> > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> -- > > > > > > >>> What most experimenters take for granted before they begin > their > > > > > > >>> experiments is infinitely more interesting than any results > to which their > > > > > > >>> experiments lead. > > > > > > >>> -- Norbert Wiener > > > > > > >>> > > > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > > > >>> http://www.cse.buffalo.edu/~knepley/>> > > > > > > > > > > > > > > > > > > > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 31 13:59:01 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 31 Mar 2020 13:59:01 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> <216903193.1249363.1585680133244@mail.yahoo.com> Message-ID: What other dependencies? For an example makefile that compiles multiple sources into a single binary [using gnumake - which is what you have] - check src/ts/examples/tutorials/multirate/makefile Satish On Tue, 31 Mar 2020, Matthew Knepley wrote: > On Tue, Mar 31, 2020 at 2:42 PM Qin Lu via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > > My program has multiple files in a single directory, and there are some > > other dependencies. Are you talking about /petsc-3.12.4/makefile? Is there > > any instructions on how to compile my code using petsc makefile? > > > > Yes, there is a chapter in the manual. > > Thanks, > > Matt > > > > Thanks, > > Qin > > > > On Tuesday, March 31, 2020, 12:16:31 PM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > > Is your code a single source file? multiple sourcefiles in a single dir? > > any external dependencies other than petsc? > > > > If possible - try compiling your code with petsc makefile. Does the code > > run correctly this way? > > > > Satish > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > I built and tested ex1f.F90 and ex2f.F90, both call KSPCreate(), both > > work well. > > > My program is built using either MS Visual Studio or my own makefile. > > Are there any special compilation/link options required for my program in > > order to link with Petsc lib in Win-10? > > > Thanks,Qin > > > On Tuesday, March 31, 2020, 11:51:43 AM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > > And use 'CHKERRA(ierr)' in your code to catch such failures early. > > > > > > Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 > > > > > > >>>>>>>>> > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > > if (ierr /= 0) then > > > write(6,*)'Unable to initialize PETSc' > > > stop > > > endif > > > > > > call > > PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) > > > CHKERRA(ierr) > > > <<<<<< > > > > > > etc.. 
> > > > > > Satish > > > > > > On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > > > > > > > Try PETSc examples with KSPCreate() - do they run correctly? > > > > > > > > How do you build your code - do you use petsc formatted makefile? > > > > > > > > Look for differences. Also run your code in valgrind on linux. Or you > > need to debug further on windows.. > > > > > > > > Satish > > > > > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > > > > > > > > In the MS Visual Studio debugger, I can see there are 2 calls > > before KSPSetType: > > > > > > > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > > > > > > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > > > > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call > > that got error. > > > > > > > > > > My program in Linux (also built with Intel compilers 2018) works > > without problem. > > > > > > > > > > Thanks, > > > > > > > > > > Qin > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > > > > > > Do PETSc examples that use KSPSetType() say > > src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > > > > > > > Its probably best to run your code in a debugger to determine the > > problem. > > > > > > > > > > [If your code can compile on linux - I'll also suggest running it > > with valgrind] > > > > > > > > > > Satish > > > > > > > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > > > > > > > > Hello, > > > > > > I moved Intel-MPI libs to a directory without space, now the > > configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 > > worked well with mpiexec. However, my Fortran-90 program linked with this > > Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), > > same as what happened when using MPICH2. I suspect the issue is not in MPI, > > but in how Petsc is configured/built in Windows-10 using Intel compilers > > (the same program in Win-7 works without problem). The configuration is > > attached below. > > > > > > > > > > > > Do you any suggestions how to proceed? > > > > > > Thanks,Qin > > > > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe > > ifort' --with-cxx='win32fe icl' > > --with-petsc-arch="arch-win64-release-intel-mpi" > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > --with-blas-lapack-dir="/cygdrive/c/Program Files > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" > > --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib" > > --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 > > --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > > > > > > > > > > > On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via > > petsc-users wrote: > > > > > > > > > > > > On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > > > > > > > >> We just cannot cope with spaces in paths. Can you use the > > shortened > > > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > FYI: Program Files or Program Files(x86) is where windows > > installs all of its applications (from OS or installed by user). It is best > > to install your MPI and other packages in root dir C:. 
Thats why for > > example MinGW installs itself in there, so it doesn?t have to deal with the > > space in the path. > > > > > > > > > > > > No need to do this alternate install if using cygpath - as per > > installation instructions > > https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > Best regards, > > > > > > > > > > > > > > Jacob Faibussowitsch > > > > > > > (Jacob Fai - booss - oh - vitch) > > > > > > > Cell: (312) 694-3391 > > > > > > > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users < > > petsc-users at mcs.anl.gov> wrote: > > > > > > > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu > > wrote: > > > > > > > >> > > > > > > > >>> Hi, > > > > > > > >>> > > > > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. > > So I change to > > > > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't > > work. The > > > > > > > >>> config.log is attached. > > > > > > > >>> > > > > > > > >>> The following is my configuration: > > > > > > > >>> =============== > > > > > > > >>> > > > > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > > > >>> --with-cxx='win32fe icl' > > --with-petsc-arch="arch-win64-release-intel-mpi" > > > > > > > >>> > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > > >>> > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" > > --with-mpi-lib="/cygdrive/c/Program > > > > > > > >>> Files > > (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib" --with- > > > > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 > > --with-x11=0 > > > > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > > > > >>> > > > > > > > >>> ============= > > > > > > > >>> > > > > > > > >>> Thanks for any suggestions. > > > > > > > >>> > > > > > > > >> We just cannot cope with spaces in paths. Can you use the > > shortened > > > > > > > >> contiguous name instead of "Program File"? 
> > > > > > > > > > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths > > without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html > > has the > > instructions > > > > > > > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > > > > > > > balay at ps5 ~ > > > > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft > > MPI/Bin/mpiexec'` > > > > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > > > >> Thanks, > > > > > > > >> > > > > > > > >> Matt > > > > > > > >> > > > > > > > >>> Regards, > > > > > > > >>> > > > > > > > >>> Qin > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > > > > >>> knepley at gmail.com> wrote: > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > > > > >>> > > > > > > > >>> Hi Satish, > > > > > > > >>> > > > > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from > > command line. > > > > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI > > instead of > > > > > > > >>> MPICH2, probably because I have set the former's path in > > environment > > > > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI > > and build Petsc > > > > > > > >>> with Intel-MPI. > > > > > > > >>> > > > > > > > >>> As for the crash of calling to > > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > > > > >>> my Fortran-90 program, do you have any idea what can be > > wrong? Can it be > > > > > > > >>> related to MPI? > > > > > > > >>> > > > > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you > > suggested, but got > > > > > > > >>> the following output: > > > > > > > >>> > > > > > > > >>> ============ > > > > > > > >>> python ./arch-ci-mswin-intel.py > > > > > > > >>> Traceback (most recent call last): > > > > > > > >>> File "./arch-ci-mswin-intel.py", line 10, in > > > > > > > >>> import configure > > > > > > > >>> ImportError: No module named configure > > > > > > > >>> ============ > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> You have to run those from $PETSC_DIR. > > > > > > > >>> > > > > > > > >>> Matt > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> Thanks, > > > > > > > >>> Qin > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > > > > >>> > > > > > > > >>> Thanks, > > > > > > > >>> Qin > > > > > > > >>> > > > > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> Please preserve cc: to the list > > > > > > > >>> > > > > > > > >>>> shared libraries: disabled > > > > > > > >>> > > > > > > > >>> So PETSc is correctly built as static. > > > > > > > >>> > > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > > loading shared > > > > > > > >>> libraries: ?: cannot open shared object file: No such file > > or directory > > > > > > > >>> > > > > > > > >>> So its not clear which shared library this error is > > referring to. 
But then > > > > > > > >>> - this error was with petsc-3.4.2 > > > > > > > >>> > > > > > > > >>> You can always try to run the code manually without mpiexec > > - and see if > > > > > > > >>> that works. > > > > > > > >>> > > > > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > > > > >>> make ex2 > > > > > > > >>> ./ex2 > > > > > > > >>> > > > > > > > >>> Wrt MSMPI - yes its free to download > > > > > > > >>> > > > > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > > > > >>> download/install. [so I can't say if what you have is the > > correct install > > > > > > > >>> of IntelMPI or not] > > > > > > > >>> > > > > > > > >>> Check the builds we use for testing - for ex: > > > > > > > >>> config/examples/arch-ci-mswin-*.py > > > > > > > >>> > > > > > > > >>> Satish > > > > > > > >>> > > > > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > > > > >>> > > > > > > > >>>> Hi Satish, > > > > > > > >>>> The configure.log and RDict.log of Petsc-3.12.4 build is > > attached. > > > > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but > > for some > > > > > > > >>> reason I only find the /mpi/intel64/bin, but not > > /mpi/intel64/include > > > > > > > >>> subdirectory of it. > > > > > > > >>>> Thanks a lot for your help.Qin > > > > > > > >>>> On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > > >>>> > > > > > > > >>>> MPICH is unsupported - and we haven't tested with it for a > > long time. > > > > > > > >>>> > > > > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > > > > >>> os/compilers/libraries. > > > > > > > >>>> > > > > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest > > Petsc-3.13.0]? > > > > > > > >>>> > > > > > > > >>>> We recommend 64bit MSMPI for windows. > > > > > > > >>>> > > > > > > > >>>> Satish > > > > > > > >>>> > > > > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > >>>> > > > > > > > >>>>> Hello, > > > > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 > > workstation using > > > > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > > > > > >>> configuration/compilation/installation seem to finish > > without problem, but > > > > > > > >>> test program (ex19) failed since it could not find a shared > > lib. Then I > > > > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but > > it got run > > > > > > > >>> time crash when it calls > > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > > > > >>> Petsc subroutines. Note that this package was built, tested > > and worked well > > > > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > > > > >>>>> > > > > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. 
> > > > > > > >>>>> > > > > > > > >>>>> The following is my configuration: > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> =============== > > > > > > > >>>>> > > > > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe > > ifort' > > > > > > > >>> --with-cxx='win32fe icl' > > --with-petsc-arch="arch-win64-release" > > > > > > > >>> > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > > >>> > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" > > --with-debugging=0 > > > > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 > > --with-shared-libraries=0 > > > > > > > >>>>> > > > > > > > >>>>> =============== > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> The error message of running ex19 is: > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> ================= > > > > > > > >>>>> > > > > > > > >>>>> $ make > > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > > >>> test > > > > > > > >>>>> > > > > > > > >>>>> Running test examples to verify correct installation > > > > > > > >>>>> > > > > > > > >>>>> Using > > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > > > > >>>>> > > > > > > > >>>>> Possible error running C/C++ > > src/snes/examples/tutorials/ex19 with 1 > > > > > > > >>> MPI process > > > > > > > >>>>> > > > > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > > > >>>>> > > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > > loading shared > > > > > > > >>> libraries: ?: cannot open shared object file: No such file > > or directory > > > > > > > >>>>> > > > > > > > >>>>> ================= > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> Thanks a lot for any suggestions. > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> Best Regards, > > > > > > > >>>>> > > > > > > > >>>>> Qin > > > > > > > >>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> -- > > > > > > > >>> What most experimenters take for granted before they begin > > their > > > > > > > >>> experiments is infinitely more interesting than any results > > to which their > > > > > > > >>> experiments lead. 
> > > > > > > >>> -- Norbert Wiener > > > > > > > >>> > > > > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > > > > >>> > http://www.cse.buffalo.edu/~knepley/>> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From lu_qin_2000 at yahoo.com Tue Mar 31 19:45:54 2020 From: lu_qin_2000 at yahoo.com (Qin Lu) Date: Wed, 1 Apr 2020 00:45:54 +0000 (UTC) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> <216903193.1249363.1585680133244@mail.yahoo.com> Message-ID: <683790334.64750.1585701954112@mail.yahoo.com> My program finally worked after built with Petsc-3.12.4 and its include files (obviously I missed the latter and still used Petsc-3.4.2's include files in my previous tests).? The conclusion is that the old Petsc-3.4.2 does not work for Win-10. Thanks a lot for helps from Satish, Matt and Jacob! Regards,Qin? On Tuesday, March 31, 2020, 01:59:03 PM CDT, Satish Balay wrote: What other dependencies? For an example makefile that compiles multiple sources into a single binary [using gnumake - which is what you have] - check src/ts/examples/tutorials/multirate/makefile Satish On Tue, 31 Mar 2020, Matthew Knepley wrote: > On Tue, Mar 31, 2020 at 2:42 PM Qin Lu via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > > My program has multiple files in a single directory, and there are some > > other dependencies. Are you talking about /petsc-3.12.4/makefile? Is there > > any instructions on how to compile my code using petsc makefile? > > > > Yes, there is a chapter in the manual. > >? Thanks, > >? ? Matt > > > > Thanks, > > Qin > > > > On Tuesday, March 31, 2020, 12:16:31 PM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > > Is your code a single source file?? multiple sourcefiles in a single dir? > > any external dependencies other than petsc? > > > > If possible - try compiling your code with petsc makefile. Does the code > > run correctly this way? > > > > Satish > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > >? I built and tested ex1f.F90 and ex2f.F90, both call KSPCreate(), both > > work well. > > > My program is built using either MS Visual Studio or my own makefile. > > Are there any special compilation/link options required for my program in > > order to link with Petsc lib in Win-10? > > > Thanks,Qin > > >? ? On Tuesday, March 31, 2020, 11:51:43 AM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > >? And use 'CHKERRA(ierr)' in your code to catch such failures early. > > > > > > Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 > > > > > > >>>>>>>>> > > >? ? ? call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > >? ? ? if (ierr /= 0) then > > >? ? ? ? write(6,*)'Unable to initialize PETSc' > > >? ? ? ? stop > > >? ? ? endif > > > > > >? ? ? call > > PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) > > >? ? ? CHKERRA(ierr) > > > <<<<<< > > > > > > etc.. > > > > > > Satish > > > > > > On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > > > > > > > Try PETSc examples with KSPCreate() - do they run correctly? > > > > > > > > How do you build your code - do you use petsc formatted makefile? > > > > > > > > Look for differences. 
Also run your code in valgrind on linux. Or you > > need to debug further on windows.. > > > > > > > > Satish > > > > > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > > > > > > > > In the MS Visual Studio debugger, I can see there are 2 calls > > before KSPSetType: > > > > > > > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > > > > > > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > > > > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call > > that got error. > > > > > > > > > > My program in Linux (also built with Intel compilers 2018) works > > without problem. > > > > > > > > > > Thanks, > > > > > > > > > > Qin > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay < > > balay at mcs.anl.gov> wrote: > > > > > > > > > >? Do PETSc examples that use KSPSetType() say > > src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > > > > > > > Its probably best to run your code in a debugger to determine the > > problem. > > > > > > > > > > [If your code can compile on linux - I'll also suggest running it > > with valgrind] > > > > > > > > > > Satish > > > > > > > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > > > > > > > >? Hello, > > > > > > I moved Intel-MPI libs to a directory without space, now the > > configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 > > worked well with mpiexec. However, my Fortran-90 program linked with this > > Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), > > same as what happened when using MPICH2. I suspect the issue is not in MPI, > > but in how Petsc is configured/built in Windows-10 using Intel compilers > > (the same program in Win-7 works without problem). The configuration is > > attached below. > > > > > > > > > > > > Do you any suggestions how to proceed? > > > > > > Thanks,Qin > > > > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe > > ifort' --with-cxx='win32fe icl' > > --with-petsc-arch="arch-win64-release-intel-mpi" > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > --with-blas-lapack-dir="/cygdrive/c/Program Files > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" > > --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib" > > --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 > > --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via > > petsc-users wrote: > > > > > > > > > > > >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > > > > > > > >> We just cannot cope with spaces in paths. Can you use the > > shortened > > > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > FYI: Program Files or Program Files(x86) is where windows > > installs all of its applications (from OS or installed by user). It is best > > to install your MPI and other packages in root dir C:. Thats why for > > example MinGW installs itself in there, so it doesn?t have to deal with the > > space in the path. 
> > > > > > > > > > > > No need to do this alternate install if using cygpath - as per > > installation instructions > > https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > Best regards, > > > > > > > > > > > > > > Jacob Faibussowitsch > > > > > > > (Jacob Fai - booss - oh - vitch) > > > > > > > Cell: (312) 694-3391 > > > > > > > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users < > > petsc-users at mcs.anl.gov> wrote: > > > > > > > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu > > wrote: > > > > > > > >> > > > > > > > >>> Hi, > > > > > > > >>> > > > > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. > > So I change to > > > > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't > > work. The > > > > > > > >>> config.log is attached. > > > > > > > >>> > > > > > > > >>> The following is my configuration: > > > > > > > >>> =============== > > > > > > > >>> > > > > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > > > >>> --with-cxx='win32fe icl' > > --with-petsc-arch="arch-win64-release-intel-mpi" > > > > > > > >>> > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > > >>> > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" > > --with-mpi-lib="/cygdrive/c/Program > > > > > > > >>> Files > > (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > > > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 > > --with-x11=0 > > > > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > > > > >>> > > > > > > > >>> ============= > > > > > > > >>> > > > > > > > >>> Thanks for any suggestions. > > > > > > > >>> > > > > > > > >> We just cannot cope with spaces in paths. Can you use the > > shortened > > > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths > > without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html > > has the > > instructions > > > > > > > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > > > > > > > balay at ps5 ~ > > > > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft > > MPI/Bin/mpiexec'` > > > > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > > > >>? Thanks, > > > > > > > >> > > > > > > > >>? ? 
Matt > > > > > > > >> > > > > > > > >>> Regards, > > > > > > > >>> > > > > > > > >>> Qin > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > > > > >>> knepley at gmail.com> wrote: > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > > > > >>> > > > > > > > >>> Hi Satish, > > > > > > > >>> > > > > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from > > command line. > > > > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI > > instead of > > > > > > > >>> MPICH2, probably because I have set the former's path in > > environment > > > > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI > > and build Petsc > > > > > > > >>> with Intel-MPI. > > > > > > > >>> > > > > > > > >>> As for the crash of calling to > > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > > > > >>> my Fortran-90 program, do you have any idea what can be > > wrong? Can it be > > > > > > > >>> related to MPI? > > > > > > > >>> > > > > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you > > suggested, but got > > > > > > > >>> the following output: > > > > > > > >>> > > > > > > > >>> ============ > > > > > > > >>> python ./arch-ci-mswin-intel.py > > > > > > > >>> Traceback (most recent call last): > > > > > > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > > > > > > >>>? ? import configure > > > > > > > >>> ImportError: No module named configure > > > > > > > >>> ============ > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> You have to run those from $PETSC_DIR. > > > > > > > >>> > > > > > > > >>>? Matt > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> Thanks, > > > > > > > >>> Qin > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > > > > >>> > > > > > > > >>> Thanks, > > > > > > > >>> Qin > > > > > > > >>> > > > > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> Please preserve cc: to the list > > > > > > > >>> > > > > > > > >>>> shared libraries: disabled > > > > > > > >>> > > > > > > > >>> So PETSc? is correctly built as static. > > > > > > > >>> > > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > > loading shared > > > > > > > >>> libraries: ?: cannot open shared object file: No such file > > or directory > > > > > > > >>> > > > > > > > >>> So its not clear which shared library this error is > > referring to. But then > > > > > > > >>> - this error was with petsc-3.4.2 > > > > > > > >>> > > > > > > > >>> You can always try to run the code manually without mpiexec > > - and see if > > > > > > > >>> that works. > > > > > > > >>> > > > > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > > > > >>> make ex2 > > > > > > > >>> ./ex2 > > > > > > > >>> > > > > > > > >>> Wrt MSMPI - yes its free to download > > > > > > > >>> > > > > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > > > > >>> download/install. 
[so I can't say if what you have is the > > correct install > > > > > > > >>> of IntelMPI or not] > > > > > > > >>> > > > > > > > >>> Check the builds we use for testing - for ex: > > > > > > > >>> config/examples/arch-ci-mswin-*.py > > > > > > > >>> > > > > > > > >>> Satish > > > > > > > >>> > > > > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > > > > >>> > > > > > > > >>>> Hi Satish, > > > > > > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is > > attached. > > > > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but > > for some > > > > > > > >>> reason I only find the /mpi/intel64/bin, but not > > /mpi/intel64/include > > > > > > > >>> subdirectory of it. > > > > > > > >>>> Thanks a lot for your help.Qin > > > > > > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > > >>>> > > > > > > > >>>> MPICH is unsupported - and we haven't tested with it for a > > long time. > > > > > > > >>>> > > > > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > > > > >>> os/compilers/libraries. > > > > > > > >>>> > > > > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest > > Petsc-3.13.0]? > > > > > > > >>>> > > > > > > > >>>> We recommend 64bit MSMPI for windows. > > > > > > > >>>> > > > > > > > >>>> Satish > > > > > > > >>>> > > > > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > >>>> > > > > > > > >>>>> Hello, > > > > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 > > workstation using > > > > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > > > > > >>> configuration/compilation/installation seem to finish > > without problem, but > > > > > > > >>> test program (ex19) failed since it could not find a shared > > lib. Then I > > > > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but > > it got run > > > > > > > >>> time crash when it calls > > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > > > > >>> Petsc subroutines. Note that this package was built, tested > > and worked well > > > > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > > > > >>>>> > > > > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. 
> > > > > > > >>>>> > > > > > > > >>>>> The following is my configuration: > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> =============== > > > > > > > >>>>> > > > > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe > > ifort' > > > > > > > >>> --with-cxx='win32fe icl' > > --with-petsc-arch="arch-win64-release" > > > > > > > >>> > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > > >>> > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" > > --with-debugging=0 > > > > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 > > --with-shared-libraries=0 > > > > > > > >>>>> > > > > > > > >>>>> =============== > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> The error message of running ex19 is: > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> ================= > > > > > > > >>>>> > > > > > > > >>>>> $ make > > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > > >>> test > > > > > > > >>>>> > > > > > > > >>>>> Running test examples to verify correct installation > > > > > > > >>>>> > > > > > > > >>>>> Using > > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > > > > >>>>> > > > > > > > >>>>> Possible error running C/C++ > > src/snes/examples/tutorials/ex19 with 1 > > > > > > > >>> MPI process > > > > > > > >>>>> > > > > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > > > >>>>> > > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > > loading shared > > > > > > > >>> libraries: ?: cannot open shared object file: No such file > > or directory > > > > > > > >>>>> > > > > > > > >>>>> ================= > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> Thanks a lot for any suggestions. > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> Best Regards, > > > > > > > >>>>> > > > > > > > >>>>> Qin > > > > > > > >>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> -- > > > > > > > >>> What most experimenters take for granted before they begin > > their > > > > > > > >>> experiments is infinitely more interesting than any results > > to which their > > > > > > > >>> experiments lead. > > > > > > > >>> -- Norbert Wiener > > > > > > > >>> > > > > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > > > > >>> > http://www.cse.buffalo.edu/~knepley/>> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Tue Mar 31 20:34:20 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 31 Mar 2020 20:34:20 -0500 (CDT) Subject: [petsc-users] Petsc not work in Windows-10 In-Reply-To: <683790334.64750.1585701954112@mail.yahoo.com> References: <262299643.779351.1585587381138@mail.yahoo.com> <1342653823.808354.1585593391072@mail.yahoo.com> <92682649.852381.1585601007985@mail.yahoo.com> <1508446702.952954.1585618053161@mail.yahoo.com> <1699619984.1183207.1585670093716@mail.yahoo.com> <1030007059.1152545.1585671584911@mail.yahoo.com> <216903193.1249363.1585680133244@mail.yahoo.com> <683790334.64750.1585701954112@mail.yahoo.com> Message-ID: Glad you have this working now. Its generally best to update to currently supported version of petsc [esp when upgrading other env like OS, compilers, libraries etc..]. And avoid such build issues by using makefiles that are portable.. Satish On Wed, 1 Apr 2020, Qin Lu wrote: > My program finally worked after built with Petsc-3.12.4 and its include files (obviously I missed the latter and still used Petsc-3.4.2's include files in my previous tests).? > The conclusion is that the old Petsc-3.4.2 does not work for Win-10. > Thanks a lot for helps from Satish, Matt and Jacob! > Regards,Qin? > On Tuesday, March 31, 2020, 01:59:03 PM CDT, Satish Balay wrote: > > What other dependencies? > > For an example makefile that compiles multiple sources into a single binary [using gnumake - which is what you have] - check > > src/ts/examples/tutorials/multirate/makefile > > Satish > > On Tue, 31 Mar 2020, Matthew Knepley wrote: > > > On Tue, Mar 31, 2020 at 2:42 PM Qin Lu via petsc-users < > > petsc-users at mcs.anl.gov> wrote: > > > > > My program has multiple files in a single directory, and there are some > > > other dependencies. Are you talking about /petsc-3.12.4/makefile? Is there > > > any instructions on how to compile my code using petsc makefile? > > > > > > > Yes, there is a chapter in the manual. > > > >? Thanks, > > > >? ? Matt > > > > > > > Thanks, > > > Qin > > > > > > On Tuesday, March 31, 2020, 12:16:31 PM CDT, Satish Balay < > > > balay at mcs.anl.gov> wrote: > > > > > > > > > Is your code a single source file?? multiple sourcefiles in a single dir? > > > any external dependencies other than petsc? > > > > > > If possible - try compiling your code with petsc makefile. Does the code > > > run correctly this way? > > > > > > Satish > > > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > >? I built and tested ex1f.F90 and ex2f.F90, both call KSPCreate(), both > > > work well. > > > > My program is built using either MS Visual Studio or my own makefile. > > > Are there any special compilation/link options required for my program in > > > order to link with Petsc lib in Win-10? > > > > Thanks,Qin > > > >? ? On Tuesday, March 31, 2020, 11:51:43 AM CDT, Satish Balay < > > > balay at mcs.anl.gov> wrote: > > > > > > > >? And use 'CHKERRA(ierr)' in your code to catch such failures early. > > > > > > > > Refer to example src/ksp/ksp/examples/tutorials/ex7f.F90 > > > > > > > > >>>>>>>>> > > > >? ? ? call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > > >? ? ? if (ierr /= 0) then > > > >? ? ? ? write(6,*)'Unable to initialize PETSc' > > > >? ? ? ? stop > > > >? ? ? endif > > > > > > > >? ? ? call > > > PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-m',m,flg,ierr) > > > >? ? ? CHKERRA(ierr) > > > > <<<<<< > > > > > > > > etc.. 
> > > > > > > > Satish > > > > > > > > On Tue, 31 Mar 2020, Satish Balay via petsc-users wrote: > > > > > > > > > Try PETSc examples with KSPCreate() - do they run correctly? > > > > > > > > > > How do you build your code - do you use petsc formatted makefile? > > > > > > > > > > Look for differences. Also run your code in valgrind on linux. Or you > > > need to debug further on windows.. > > > > > > > > > > Satish > > > > > > > > > > On Tue, 31 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > > > > > > > > > > > In the MS Visual Studio debugger, I can see there are 2 calls > > > before KSPSetType: > > > > > > > > > > > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > > > > > > > > > > > call KSPCreate(PETSC_COMM_WORLD,ksp_solver,ierr) > > > > > > > > > > > > It turns out KSPCreate returns ierr=1, so it is the first Petsc call > > > that got error. > > > > > > > > > > > > My program in Linux (also built with Intel compilers 2018) works > > > without problem. > > > > > > > > > > > > Thanks, > > > > > > > > > > > > Qin > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 11:01:56 AM CDT, Satish Balay < > > > balay at mcs.anl.gov> wrote: > > > > > > > > > > > >? Do PETSc examples that use KSPSetType() say > > > src/ksp/ksp/tutorials/ex7f.F90 compile/run with this install? > > > > > > > > > > > > Its probably best to run your code in a debugger to determine the > > > problem. > > > > > > > > > > > > [If your code can compile on linux - I'll also suggest running it > > > with valgrind] > > > > > > > > > > > > Satish > > > > > > > > > > > > On Tue, 31 Mar 2020, Qin Lu wrote: > > > > > > > > > > > > >? Hello, > > > > > > > I moved Intel-MPI libs to a directory without space, now the > > > configuration/build of Petsc-3.12.4 worked with Intel-MPI, and test of ex2 > > > worked well with mpiexec. However, my Fortran-90 program linked with this > > > Petsc lib still crashed at calling KSPSetType(ksp_solver,KSPBCGS,ierr), > > > same as what happened when using MPICH2. I suspect the issue is not in MPI, > > > but in how Petsc is configured/built in Windows-10 using Intel compilers > > > (the same program in Win-7 works without problem). The configuration is > > > attached below. > > > > > > > > > > > > > > Do you any suggestions how to proceed? > > > > > > > Thanks,Qin > > > > > > > ============./configure --with-cc='win32fe icl' --with-fc='win32fe > > > ifort' --with-cxx='win32fe icl' > > > --with-petsc-arch="arch-win64-release-intel-mpi" > > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > --with-blas-lapack-dir="/cygdrive/c/Program Files > > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > --with-mpi-include="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/include" > > > --with-mpi-lib="/cygdrive/c/cygwin_cache/Intel-mpi-2019.6.166/intel64/lib/release/impi.lib" > > > --with-mpi-compilers=0--with-debugging=0 --useThreads=0 --with-x=0 > > > --with-x11=0 --with-xt=0 --with-shared-libraries=0 > > > > > > > > > > > > > > > > > > > > >? ? On Tuesday, March 31, 2020, 08:39:01 AM CDT, Satish Balay via > > > petsc-users wrote: > > > > > > > > > > > > > >? On Mon, 30 Mar 2020, Jacob Faibussowitsch wrote: > > > > > > > > > > > > > > > >> We just cannot cope with spaces in paths. Can you use the > > > shortened > > > > > > > > >> contiguous name instead of "Program File"? 
> > > > > > > > > > > > > > > > FYI: Program Files or Program Files(x86) is where windows > > > installs all of its applications (from OS or installed by user). It is best > > > to install your MPI and other packages in root dir C:. Thats why for > > > example MinGW installs itself in there, so it doesn?t have to deal with the > > > space in the path. > > > > > > > > > > > > > > No need to do this alternate install if using cygpath - as per > > > installation instructions > > > https://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > Best regards, > > > > > > > > > > > > > > > > Jacob Faibussowitsch > > > > > > > > (Jacob Fai - booss - oh - vitch) > > > > > > > > Cell: (312) 694-3391 > > > > > > > > > > > > > > > > > On Mar 30, 2020, at 9:18 PM, Satish Balay via petsc-users < > > > petsc-users at mcs.anl.gov> wrote: > > > > > > > > > > > > > > > > > > On Mon, 30 Mar 2020, Matthew Knepley wrote: > > > > > > > > > > > > > > > > > >> On Mon, Mar 30, 2020 at 9:28 PM Qin Lu > > > wrote: > > > > > > > > >> > > > > > > > > >>> Hi, > > > > > > > > >>> > > > > > > > > >>> I installed Intel-MPI 2019, and configured petsc-3.12.4 using > > > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files > > > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64", it didn't work. > > > So I change to > > > > > > > > >>> use --with-mpi-include and --with-mpi-lib, still didn't > > > work. The > > > > > > > > >>> config.log is attached. > > > > > > > > >>> > > > > > > > > >>> The following is my configuration: > > > > > > > > >>> =============== > > > > > > > > >>> > > > > > > > > >>> ./configure --with-cc='win32fe icl' --with-fc='win32fe ifort' > > > > > > > > >>> --with-cxx='win32fe icl' > > > --with-petsc-arch="arch-win64-release-intel-mpi" > > > > > > > > >>> > > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.12.4-release-win-64bit-intel-mpi > > > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > > > >>> > > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > > > >>> --with-mpi-include="/cygdrive/c/Program Files > > > > > > > > >>> (x86)/IntelSWTools/mpi/2019.6.166/intel64/include" > > > --with-mpi-lib="/cygdrive/c/Program > > > > > > > > >>> Files > > > (x86)/IntelSWTools/mpi/2019.6.166/intel64/lib/impicxx.lib"? --with- > > > > > > > > >>> mpi-compilers=0 --with-debugging=0 --useThreads=0 --with-x=0 > > > --with-x11=0 > > > > > > > > >>> --with-xt=0 --with-shared-libraries=0 > > > > > > > > >>> > > > > > > > > >>> ============= > > > > > > > > >>> > > > > > > > > >>> Thanks for any suggestions. > > > > > > > > >>> > > > > > > > > >> We just cannot cope with spaces in paths. Can you use the > > > shortened > > > > > > > > >> contiguous name instead of "Program File"? > > > > > > > > > > > > > > > > > > > > > > > > > > > Yeah - the config/examples/arch-ci-mswin*.py lists paths > > > without spaces - and https://www.mcs.anl.gov/petsc/documentation/installation.html > > > has the > > > instructions > > > > > > > > > > > > > > > > > > The way to get this is: (for example) > > > > > > > > > > > > > > > > > > balay at ps5 ~ > > > > > > > > > $ cygpath -u `cygpath -ms '/cygdrive/C/Program Files/Microsoft > > > MPI/Bin/mpiexec'` > > > > > > > > > /cygdrive/c/PROGRA~1/MICROS~2/Bin/mpiexec.exe > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > > > > >>? 
Thanks, > > > > > > > > >> > > > > > > > > >>? ? Matt > > > > > > > > >> > > > > > > > > >>> Regards, > > > > > > > > >>> > > > > > > > > >>> Qin > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> On Monday, March 30, 2020, 04:15:14 PM CDT, Matthew Knepley < > > > > > > > > >>> knepley at gmail.com> wrote: > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> On Mon, Mar 30, 2020 at 4:43 PM Qin Lu via petsc-users < > > > > > > > > >>> petsc-users at mcs.anl.gov> wrote: > > > > > > > > >>> > > > > > > > > >>> Hi Satish, > > > > > > > > >>> > > > > > > > > >>> The ex2.exe works with "mpiexec -np 2" when I ran it from > > > command line. > > > > > > > > >>> Then I ran "which mpiexec", it actually points to Intel-MPI > > > instead of > > > > > > > > >>> MPICH2, probably because I have set the former's path in > > > environment > > > > > > > > >>> variable PATH in Win-10. I will try to reinstall Intel-MPI > > > and build Petsc > > > > > > > > >>> with Intel-MPI. > > > > > > > > >>> > > > > > > > > >>> As for the crash of calling to > > > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) in > > > > > > > > >>> my Fortran-90 program, do you have any idea what can be > > > wrong? Can it be > > > > > > > > >>> related to MPI? > > > > > > > > >>> > > > > > > > > >>> I tested config/examples/arch-ci-mswin-intel.py as you > > > suggested, but got > > > > > > > > >>> the following output: > > > > > > > > >>> > > > > > > > > >>> ============ > > > > > > > > >>> python ./arch-ci-mswin-intel.py > > > > > > > > >>> Traceback (most recent call last): > > > > > > > > >>>? File "./arch-ci-mswin-intel.py", line 10, in > > > > > > > > >>>? ? import configure > > > > > > > > >>> ImportError: No module named configure > > > > > > > > >>> ============ > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> You have to run those from $PETSC_DIR. > > > > > > > > >>> > > > > > > > > >>>? Matt > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> Thanks, > > > > > > > > >>> Qin > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> I will try to use Intel-MPI and see what will happen. > > > > > > > > >>> > > > > > > > > >>> Thanks, > > > > > > > > >>> Qin > > > > > > > > >>> > > > > > > > > >>> On Monday, March 30, 2020, 01:47:49 PM CDT, Satish Balay < > > > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> Please preserve cc: to the list > > > > > > > > >>> > > > > > > > > >>>> shared libraries: disabled > > > > > > > > >>> > > > > > > > > >>> So PETSc? is correctly built as static. > > > > > > > > >>> > > > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > > > loading shared > > > > > > > > >>> libraries: ?: cannot open shared object file: No such file > > > or directory > > > > > > > > >>> > > > > > > > > >>> So its not clear which shared library this error is > > > referring to. But then > > > > > > > > >>> - this error was with petsc-3.4.2 > > > > > > > > >>> > > > > > > > > >>> You can always try to run the code manually without mpiexec > > > - and see if > > > > > > > > >>> that works. > > > > > > > > >>> > > > > > > > > >>> cd src/ksp/ksp/examples/tutorials > > > > > > > > >>> make ex2 > > > > > > > > >>> ./ex2 > > > > > > > > >>> > > > > > > > > >>> Wrt MSMPI - yes its free to download > > > > > > > > >>> > > > > > > > > >>> And PETSc does work with Intel-MPI. It might be a separate > > > > > > > > >>> download/install. 
[so I can't say if what you have is the > > > correct install > > > > > > > > >>> of IntelMPI or not] > > > > > > > > >>> > > > > > > > > >>> Check the builds we use for testing - for ex: > > > > > > > > >>> config/examples/arch-ci-mswin-*.py > > > > > > > > >>> > > > > > > > > >>> Satish > > > > > > > > >>> > > > > > > > > >>> On Mon, 30 Mar 2020, Qin Lu wrote: > > > > > > > > >>> > > > > > > > > >>>> Hi Satish, > > > > > > > > >>>> The configure.log and RDict.log of? Petsc-3.12.4 build is > > > attached. > > > > > > > > >>>> Is the MSMPI free to use in Windows-10? > > > > > > > > >>>> Does Petsc support Intel-MPI? I have it in my machine, but > > > for some > > > > > > > > >>> reason I only find the /mpi/intel64/bin, but not > > > /mpi/intel64/include > > > > > > > > >>> subdirectory of it. > > > > > > > > >>>> Thanks a lot for your help.Qin > > > > > > > > >>>>? On Monday, March 30, 2020, 12:26:09 PM CDT, Satish Balay < > > > > > > > > >>> balay at mcs.anl.gov> wrote: > > > > > > > > >>>> > > > > > > > > >>>> MPICH is unsupported - and we haven't tested with it for a > > > long time. > > > > > > > > >>>> > > > > > > > > >>>> And petsc-3.4.2 is from 2013 - and untested with current gen > > > > > > > > >>> os/compilers/libraries. > > > > > > > > >>>> > > > > > > > > >>>> Can you send logs from Petsc-3.12.4 build [or try latest > > > Petsc-3.13.0]? > > > > > > > > >>>> > > > > > > > > >>>> We recommend 64bit MSMPI for windows. > > > > > > > > >>>> > > > > > > > > >>>> Satish > > > > > > > > >>>> > > > > > > > > >>>> On Mon, 30 Mar 2020, Qin Lu via petsc-users wrote: > > > > > > > > >>>> > > > > > > > > >>>>> Hello, > > > > > > > > >>>>> I am trying to build Petsc-3.4.2 in my Windows-10 > > > workstation using > > > > > > > > >>> Cygwin, with Intel-2018 compilers and MKL, and MPICH2. The > > > > > > > > >>> configuration/compilation/installation seem to finish > > > without problem, but > > > > > > > > >>> test program (ex19) failed since it could not find a shared > > > lib. Then I > > > > > > > > >>> linked the libpetsc.lib with my program (in Fortran-90), but > > > it got run > > > > > > > > >>> time crash when it calls > > > KSPSetPCSide(ksp_solver,PC_RIGHT,ierr) or other > > > > > > > > >>> Petsc subroutines. Note that this package was built, tested > > > and worked well > > > > > > > > >>> with the same Fortran-90 program in my Windows-7 workstation. > > > > > > > > >>>>> > > > > > > > > >>>>> Also tried Petsc-3.12.4 but got the same errors. 
> > > > > > > > >>>>> > > > > > > > > >>>>> The following is my configuration: > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> =============== > > > > > > > > >>>>> > > > > > > > > >>>>> ./configure --with-cc='win32fe icl' --with-fc='win32fe > > > ifort' > > > > > > > > >>> --with-cxx='win32fe icl' > > > --with-petsc-arch="arch-win64-release" > > > > > > > > >>> > > > --prefix=/cygdrive/c/cygwin_cache/petsc-3.4.2-release-win-64bit > > > > > > > > >>> --with-blas-lapack-dir="/cygdrive/c/Program Files > > > > > > > > >>> > > > (x86)/IntelSWTools/compilers_and_libraries_2018.5.274/windows/mkl/lib/intel64" > > > > > > > > >>> --with-mpi-dir="/cygdrive/c/Program Files/mpich2x64" > > > --with-debugging=0 > > > > > > > > >>> --useThreads=0 --with-x=0 --with-x11=0 --with-xt=0 > > > --with-shared-libraries=0 > > > > > > > > >>>>> > > > > > > > > >>>>> =============== > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> The error message of running ex19 is: > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> ================= > > > > > > > > >>>>> > > > > > > > > >>>>> $ make > > > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > > > >>> test > > > > > > > > >>>>> > > > > > > > > >>>>> Running test examples to verify correct installation > > > > > > > > >>>>> > > > > > > > > >>>>> Using > > > PETSC_DIR=/cygdrive/c/cygwin_cache/petsc-3.4.2-debug-win-64bit > > > > > > > > >>> and PETSC_ARCH=arch-win64-debug > > > > > > > > >>>>> > > > > > > > > >>>>> Possible error running C/C++ > > > src/snes/examples/tutorials/ex19 with 1 > > > > > > > > >>> MPI process > > > > > > > > >>>>> > > > > > > > > >>>>> See http://www.mcs.anl.gov/petsc/documentation/faq.html > > > > > > > > >>>>> > > > > > > > > >>>>> C:/Program Files/mpich2x64/bin/mpiexec.exe: error while > > > loading shared > > > > > > > > >>> libraries: ?: cannot open shared object file: No such file > > > or directory > > > > > > > > >>>>> > > > > > > > > >>>>> ================= > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> Thanks a lot for any suggestions. > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> Best Regards, > > > > > > > > >>>>> > > > > > > > > >>>>> Qin > > > > > > > > >>> > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> -- > > > > > > > > >>> What most experimenters take for granted before they begin > > > their > > > > > > > > >>> experiments is infinitely more interesting than any results > > > to which their > > > > > > > > >>> experiments lead. > > > > > > > > >>> -- Norbert Wiener > > > > > > > > >>> > > > > > > > > >>> https://www.cse.buffalo.edu/~knepley/ > > > > > > > > >>> > > http://www.cse.buffalo.edu/~knepley/>> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From hzhang at mcs.anl.gov Tue Mar 31 22:51:07 2020 From: hzhang at mcs.anl.gov (Zhang, Hong) Date: Wed, 1 Apr 2020 03:51:07 +0000 Subject: [petsc-users] AIJ vs BAIJ when using ILU factorization In-Reply-To: References: Message-ID: Fande, Checking aij.result: Mat Object: () 1 MPI processes type: seqaij rows=25816, cols=25816, bs=4 total: nonzeros=1297664, allocated nonzeros=1297664 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 6454 nodes, limit used is 5 i.e., it uses bs=4 with I-node. 
The implementation of MatSolve() is similar to baij with bs=4. What happens if you try aij with '-matload_block_size 1 -mat_no_inode true'?

Hong
________________________________
From: petsc-users on behalf of Fande Kong
Sent: Monday, March 30, 2020 12:25 PM
To: PETSc users list
Subject: [petsc-users] AIJ vs BAIJ when using ILU factorization

Hi All,

There is a system of equations arising from the discretization of the 3D incompressible Navier-Stokes equations using a finite element method. Four unknowns are placed on each mesh vertex, so there is a 4x4 saddle-point block per vertex. I was planning to solve the linear equations using an incomplete LU factorization (which will eventually be used as a subdomain solver for ASM). Right now, I am trying to study the ILU performance using AIJ and BAIJ, respectively. From my understanding, BAIJ should give me better results since it inverts the 4x4 blocks exactly, while AIJ does not. However, I found that both BAIJ and AIJ gave me identical results in terms of the number of iterations. Was that just a coincidence, or are they identical in theory? I understand the runtimes may differ because BAIJ has better data locality.

Please see the attached files for the results and solver configuration.

Thanks,

Fande,
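
A minimal sketch of the comparison suggested above, assuming a stand-alone driver that reads the assembled system with MatLoad()/VecLoad(), calls MatSetFromOptions() and KSPSetFromOptions(), and then solves with KSP, so the matrix format and ILU settings can be switched entirely from the run-time options. The driver name "loadsolve", the file "system.bin", and its -f option are placeholders, not part of the original thread (PETSc's src/ksp/ksp/examples/tutorials/ex10.c is a ready-made driver of this kind); the remaining options are standard PETSc run-time options:

  # point-block ILU: BAIJ with 4x4 blocks
  ./loadsolve -f system.bin -mat_type seqbaij -matload_block_size 4 \
      -ksp_type gmres -pc_type ilu -ksp_monitor_true_residual -ksp_view

  # scalar ILU: AIJ with bs=1 and inodes disabled, as Hong suggests
  ./loadsolve -f system.bin -mat_type seqaij -matload_block_size 1 -mat_no_inode true \
      -ksp_type gmres -pc_type ilu -ksp_monitor_true_residual -ksp_view

Comparing the -ksp_monitor_true_residual histories and the -ksp_view output of the two runs shows whether the iteration counts change once the inode/blocked code path is taken out of the picture.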