From junming.duan at epfl.ch Thu Jun 1 00:45:18 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Thu, 1 Jun 2023 05:45:18 +0000 Subject: [petsc-users] Gauss-Lobatto-Legendre Element Gradient -- Caught signal number 11 SEGV Message-ID: Dear all, I have a simple demo code attached below, which gives a segmentation violation error. Can you help me with this problem? I think the problem is due to the destroy function. I am using version 3.19.2 with debugging. #include static char help[] = "test.\n"; int main(int argc, char *argv[]) { PetscCall(PetscInitialize(&argc, &argv, 0, help)); PetscScalar *nodes; PetscScalar *weights; PetscScalar **diff; PetscInt n = 3; PetscCall(PetscMalloc2(n, &nodes, n, &weights)); PetscCall(PetscDTGaussLobattoLegendreQuadrature(n, PETSCGAUSSLOBATTOLEGENDRE_VIA_LINEAR_ALGEBRA, nodes, weights)); PetscCall(PetscGaussLobattoLegendreElementGradientCreate(n, nodes, weights, &diff, NULL)); PetscCall(PetscGaussLobattoLegendreElementGradientDestroy(n, nodes, weights, &diff, NULL)); PetscCall(PetscFree2(nodes, weights)); PetscCall(PetscFinalize()); return 0; } Junming -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jun 1 06:24:02 2023 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 1 Jun 2023 07:24:02 -0400 Subject: [petsc-users] Gauss-Lobatto-Legendre Element Gradient -- Caught signal number 11 SEGV In-Reply-To: References: Message-ID: On Thu, Jun 1, 2023 at 1:46?AM Duan Junming via petsc-users < petsc-users at mcs.anl.gov> wrote: > Dear all, > > > I have a simple demo code attached below, which gives a segmentation > violation error. > > Can you help me with this problem? I think the problem is due to the > destroy function. > > I am using version 3.19.2 with debugging. > Yes, the check is wrong. I will fix it. For now, you can just pass in the transpose argument as well. Thanks, Matt > #include > > static char help[] = "test.\n"; > int main(int argc, char *argv[]) { > PetscCall(PetscInitialize(&argc, &argv, 0, help)); > PetscScalar *nodes; > PetscScalar *weights; > PetscScalar **diff; > PetscInt n = 3; > PetscCall(PetscMalloc2(n, &nodes, n, &weights)); > PetscCall(PetscDTGaussLobattoLegendreQuadrature(n, > PETSCGAUSSLOBATTOLEGENDRE_VIA_LINEAR_ALGEBRA, nodes, weights)); > PetscCall(PetscGaussLobattoLegendreElementGradientCreate(n, nodes, > weights, &diff, NULL)); > PetscCall(PetscGaussLobattoLegendreElementGradientDestroy(n, nodes, > weights, &diff, NULL)); > PetscCall(PetscFree2(nodes, weights)); > PetscCall(PetscFinalize()); > return 0; > } > > > Junming > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
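A minimal sketch of the workaround Matthew suggests above: also request the transposed gradient, so that both the create and the destroy call receive a non-NULL final argument. The name diffT is illustrative and the snippet is untested here; otherwise it follows the code in the report verbatim.

  PetscScalar *nodes, *weights, **diff, **diffT;
  PetscInt n = 3;
  PetscCall(PetscMalloc2(n, &nodes, n, &weights));
  PetscCall(PetscDTGaussLobattoLegendreQuadrature(n, PETSCGAUSSLOBATTOLEGENDRE_VIA_LINEAR_ALGEBRA, nodes, weights));
  /* pass &diffT instead of NULL so the check in the destroy routine is satisfied */
  PetscCall(PetscGaussLobattoLegendreElementGradientCreate(n, nodes, weights, &diff, &diffT));
  PetscCall(PetscGaussLobattoLegendreElementGradientDestroy(n, nodes, weights, &diff, &diffT));
  PetscCall(PetscFree2(nodes, weights));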
URL: From arsennnic at gmail.com Thu Jun 1 09:01:24 2023 From: arsennnic at gmail.com (Hawk Shaw) Date: Thu, 1 Jun 2023 22:01:24 +0800 Subject: [petsc-users] Failed to configure on Windows with latest Intel OneAPI Message-ID: Hi, I failed to configue PETSc on Windows with latest Intel OneAPI toolkit: ./configure --with-cc="win32fe cl" --with-cxx="win32fe cl" --with-fc=0 \ --with-debugging=0 --with-shared-libraries=0 --with-x=0 --with-quad-precision=0 \ --with-threadcomm=1 --with-openmp=1 \ --with-blaslapack-include="$MKLROOT/include" \ --with-blaslapack-lib="-L$MKLROOT/lib/intel64 mkl_core.lib mkl_intel_thread.lib mkl_intel_lp64.lib libiomp5md.lib" \ --with-mpi-include="$I_MPI_ROOT/include" \ --with-mpi-lib="-L$I_MPI_ROOT/lib/release impi.lib" \ --with-mpiexec="$I_MPI_ROOT/bin/mpiexec" \ --ignore-cygwin-link However, the configuration was successful with Intel OneAPI 2022.1.0.93 or previous version. Error message: ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= ============================================================================================= ***** WARNING ***** Using default optimization C flags "-O". You might consider manually setting optimal optimization flags for your system with COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for examples ============================================================================================= ============================================================================================= ***** WARNING ***** Using default Cxx optimization flags "-O". You might consider manually setting optimal optimization flags for your system with CXXOPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for examples ============================================================================================= TESTING: checkCxxLibraries from config.compilers(config/BuildSystem/config/compilers.py:450) ********************************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): --------------------------------------------------------------------------------------------- Cxx libraries cannot directly be used with C as linker. If you don't need the C++ compiler to build external packages or for you application you can run ./configure with --with-cxx=0. Otherwise you need a different combination of C and C++ compilers ********************************************************************************************* makefile:24: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules: No such file or directory make[1]: *** No rule to make target '/cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules'. Stop. gmakefile:67: arch-mswin-c-opt/lib/petsc/conf/files: No such file or directory make: *** [GNUmakefile:17: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/include/petscconf.h] Error 2 makefile:24: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules: No such file or directory make[1]: *** No rule to make target '/cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules'. Stop. 
/cygdrive/e/petsc-v3.19.2/lib/petsc/conf/variables:140: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscvariables: No such file or directory make: *** [GNUmakefile:17: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscvariables] Error 2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jun 1 12:20:20 2023 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 1 Jun 2023 13:20:20 -0400 Subject: [petsc-users] Failed to configure on Windows with latest Intel OneAPI In-Reply-To: References: Message-ID: Please send configure.log to petsc-maint at mcs.anl.gov > On Jun 1, 2023, at 10:01 AM, Hawk Shaw wrote: > > Hi, > > I failed to configue PETSc on Windows with latest Intel OneAPI toolkit: > > ./configure --with-cc="win32fe cl" --with-cxx="win32fe cl" --with-fc=0 \ > --with-debugging=0 --with-shared-libraries=0 --with-x=0 --with-quad-precision=0 \ > --with-threadcomm=1 --with-openmp=1 \ > --with-blaslapack-include="$MKLROOT/include" \ > --with-blaslapack-lib="-L$MKLROOT/lib/intel64 mkl_core.lib mkl_intel_thread.lib mkl_intel_lp64.lib libiomp5md.lib" \ > --with-mpi-include="$I_MPI_ROOT/include" \ > --with-mpi-lib="-L$I_MPI_ROOT/lib/release impi.lib" \ > --with-mpiexec="$I_MPI_ROOT/bin/mpiexec" \ > --ignore-cygwin-link > > However, the configuration was successful with Intel OneAPI 2022.1.0.93 or previous version. > > Error message: > > ============================================================================================= > Configuring PETSc to compile on your system > ============================================================================================= > ============================================================================================= > ***** WARNING ***** > Using default optimization C flags "-O". You might consider manually setting optimal > optimization flags for your system with COPTFLAGS="optimization flags" see > config/examples/arch-*-opt.py for examples > ============================================================================================= > ============================================================================================= > ***** WARNING ***** > Using default Cxx optimization flags "-O". You might consider manually setting optimal > optimization flags for your system with CXXOPTFLAGS="optimization flags" see > config/examples/arch-*-opt.py for examples > ============================================================================================= > TESTING: checkCxxLibraries from config.compilers(config/BuildSystem/config/compilers.py:450) > ********************************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > --------------------------------------------------------------------------------------------- > Cxx libraries cannot directly be used with C as linker. > If you don't need the C++ compiler to build external packages or for you application you > can run > ./configure with --with-cxx=0. Otherwise you need a different combination of C and C++ > compilers > ********************************************************************************************* > > makefile:24: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules: No such file or directory > make[1]: *** No rule to make target '/cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules'. Stop. 
> gmakefile:67: arch-mswin-c-opt/lib/petsc/conf/files: No such file or directory > make: *** [GNUmakefile:17: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/include/petscconf.h] Error 2 > makefile:24: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules: No such file or directory > make[1]: *** No rule to make target '/cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules'. Stop. > /cygdrive/e/petsc-v3.19.2/lib/petsc/conf/variables:140: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscvariables: No such file or directory > make: *** [GNUmakefile:17: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscvariables] Error 2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jun 1 20:24:39 2023 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 1 Jun 2023 21:24:39 -0400 Subject: [petsc-users] Failed to configure on Windows with latest Intel OneAPI In-Reply-To: References: Message-ID: <07956453-3EB0-4B2D-BF84-97332E3B1D1D@petsc.dev> If you try to access it from the Windows side the file is a link that is not easily dealt with. You need to find some way to access and send the file from Cygwin or copy it to another machine with a sane operating system. Barry > On Jun 1, 2023, at 9:12 PM, Hawk Shaw wrote: > > When I try to open configure.log with Notepad, it says "the file cannot be accessed by the system". > Besides, the file size is 0 bytes. ? > > > Barry Smith > ?2023?6?2??? 01:20??? >> >> Please send configure.log to petsc-maint at mcs.anl.gov >> >>> On Jun 1, 2023, at 10:01 AM, Hawk Shaw > wrote: >>> >>> Hi, >>> >>> I failed to configue PETSc on Windows with latest Intel OneAPI toolkit: >>> >>> ./configure --with-cc="win32fe cl" --with-cxx="win32fe cl" --with-fc=0 \ >>> --with-debugging=0 --with-shared-libraries=0 --with-x=0 --with-quad-precision=0 \ >>> --with-threadcomm=1 --with-openmp=1 \ >>> --with-blaslapack-include="$MKLROOT/include" \ >>> --with-blaslapack-lib="-L$MKLROOT/lib/intel64 mkl_core.lib mkl_intel_thread.lib mkl_intel_lp64.lib libiomp5md.lib" \ >>> --with-mpi-include="$I_MPI_ROOT/include" \ >>> --with-mpi-lib="-L$I_MPI_ROOT/lib/release impi.lib" \ >>> --with-mpiexec="$I_MPI_ROOT/bin/mpiexec" \ >>> --ignore-cygwin-link >>> >>> However, the configuration was successful with Intel OneAPI 2022.1.0.93 or previous version. >>> >>> Error message: >>> >>> ============================================================================================= >>> Configuring PETSc to compile on your system >>> ============================================================================================= >>> ============================================================================================= >>> ***** WARNING ***** >>> Using default optimization C flags "-O". You might consider manually setting optimal >>> optimization flags for your system with COPTFLAGS="optimization flags" see >>> config/examples/arch-*-opt.py for examples >>> ============================================================================================= >>> ============================================================================================= >>> ***** WARNING ***** >>> Using default Cxx optimization flags "-O". 
You might consider manually setting optimal >>> optimization flags for your system with CXXOPTFLAGS="optimization flags" see >>> config/examples/arch-*-opt.py for examples >>> ============================================================================================= >>> TESTING: checkCxxLibraries from config.compilers(config/BuildSystem/config/compilers.py:450) >>> ********************************************************************************************* >>> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): >>> --------------------------------------------------------------------------------------------- >>> Cxx libraries cannot directly be used with C as linker. >>> If you don't need the C++ compiler to build external packages or for you application you >>> can run >>> ./configure with --with-cxx=0. Otherwise you need a different combination of C and C++ >>> compilers >>> ********************************************************************************************* >>> >>> makefile:24: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules: No such file or directory >>> make[1]: *** No rule to make target '/cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules'. Stop. >>> gmakefile:67: arch-mswin-c-opt/lib/petsc/conf/files: No such file or directory >>> make: *** [GNUmakefile:17: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/include/petscconf.h] Error 2 >>> makefile:24: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules: No such file or directory >>> make[1]: *** No rule to make target '/cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscrules'. Stop. >>> /cygdrive/e/petsc-v3.19.2/lib/petsc/conf/variables:140: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscvariables: No such file or directory >>> make: *** [GNUmakefile:17: /cygdrive/e/petsc-v3.19.2/arch-mswin-c-opt/lib/petsc/conf/petscvariables] Error 2 >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 42862 bytes Desc: not available URL: From bsmith at petsc.dev Fri Jun 2 10:47:53 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 2 Jun 2023 11:47:53 -0400 Subject: [petsc-users] Build Error In-Reply-To: References: <5DF1A47F-1466-4F02-B77F-1E154B4C2609@petsc.dev> <3747B62E-D651-4D49-9C3C-33CD32E95FAA@petsc.dev> Message-ID: <58F25413-7779-4B23-B142-82681C798B92@petsc.dev> Ok, beginning to understand the problem For some reason the HDF build is deciding it needs to rebuild the GNU configure scripts instead of just using those in the tarball. I am not sure why the build is trying to rebuild the GNU configure stuff, that is how/why the line Executing: HDF5_ACLOCAL=$(which aclocal) HDF5_AUTOHEADER=$(which autoheader) HDF5_AUTOMAKE=$(which automake) HDF5_AUTOCONF=$(which autoconf) HDF5_LIBTOOL=$(which libtool) HDF5_M4=$(which m4) ./autogen.sh stdout: is being triggered. I am not sure if it is PETSc or HDF causing this to happen. I cannot find the above command anywhere in PETSc or in HDF5 so I have no clue where it came from? Moose? During this autogen.sh process the HDF5 tools cannot handle the fact that Apple does not provide the GNU libtoolize package (Apple actually provides something completely different but with the same name). (Even though there autogen.sh has code that is suppose to work on Apple I expect it is broken). 
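For reference, Homebrew installs GNU libtoolize under the name glibtoolize, so a sketch of the configure option mentioned further down this thread would be (the remaining options being whatever you normally pass):

  ./configure --with-libtoolize-exec=$(which glibtoolize) ...your usual options...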
I see you have installed brew libtool and it is in the beginning of your path so I am not sure why hdf5 autogen is no working. I got PETSc v3.16.6 to build on my similar Mac (it did not trigger a ./autogen.sh) so I am not sure why it does not work for you and triggers the (what should be unneeded) ./autogen.sh Executing: HDF5_ACLOCAL=$(which aclocal) HDF5_AUTOHEADER=$(which autoheader) HDF5_AUTOMAKE=$(which automake) HDF5_AUTOCONF=$(which autoconf) HDF5_LIBTOOL=$(which libtool) HDF5_M4=$(which m4) ./autogen.sh stdout: ************************** * HDF5 autogen.sh script * ************************** Running trace script: Finished processing HDF5 API calls Running error generation script: Generating 'H5Epubgen.h' Generating 'H5Einit.h' Generating 'H5Eterm.h' Generating 'H5Edefin.h' Running API version generation script: Generating 'src/H5version.h' Running overflow macro generation script: Generating 'H5overflow.h' /usr/bin/libtoolize --copy --force **** Configure header /var/folders/7k/n1k8z33d6wb1p15hrflkyq540000gn/T/petsc-s9z775lb/confdefs.h **** #if !defined(INCLUDED_UNKNOWN) #define INCLUDED_UNKNOWN Could not execute "['HDF5_ACLOCAL=$(which aclocal) HDF5_AUTOHEADER=$(which autoheader) HDF5_AUTOMAKE=$(which automake) HDF5_AUTOCONF=$(which autoconf) HDF5_LIBTOOL=$(which libtool) HDF5_M4=$(which m4) ./autogen.sh']": ************************** * HDF5 autogen.sh script * ************************** Running trace script: Finished processing HDF5 API calls Running error generation script: Generating 'H5Epubgen.h' Generating 'H5Einit.h' Generating 'H5Eterm.h' Generating 'H5Edefin.h' Running API version generation script: Generating 'src/H5version.h' Running overflow macro generation script: Generating 'H5overflow.h' /usr/bin/libtoolize --copy --forceusage: dirname string [...] ./autogen.sh: line 256: /usr/bin/libtoolize: No such file or directory ? > On Jun 2, 2023, at 10:55 AM, SENECA, MICHAEL wrote: > > Hi Barry, > > I found the configure log. Hopefully, this helps to determine what is wrong? > > Thanks! > Michael > From: Barry Smith > > Date: Friday, May 26, 2023 at 12:41 PM > To: SENECA, MICHAEL > > Cc: PETSc users list > > Subject: Re: [petsc-users] Build Error > > > When PETSc is configured it creates a file configure.log in the PETSC_DIR directory; when make is run on it it creates a make.log file. > > The configure.log file is very useful for figuring anything that goes wrong. > > > > On May 26, 2023, at 1:32 PM, SENECA, MICHAEL > wrote: > > Where might I find the configure.log? Or do you mean the petsc build log information? > > > From: Barry Smith > > Date: Friday, May 26, 2023 at 9:40 AM > To: SENECA, MICHAEL > > Cc: petsc-users at mcs.anl.gov > > Subject: Re: [petsc-users] Build Error > > > PETSc configure is suppose to handle this cleanly. Please send configure.log to petsc-maint at mcs.anl.gov as we need more context to understand why it is not working. > > PETSc configure looks for libgtoolize (which is what brew names it) and uses it for libtoolize > > You can use --with-libtoolize-exec=pathtolibtoolize (or --with-libtoolize=pathtolibtoolize for older versions of PETSc) to select the executable PETSc uses > > > > > > > On May 25, 2023, at 1:35 PM, SENECA, MICHAEL via petsc-users > wrote: > > Hi all, > > I have been attempting to install cardinal on my new MacBook M2 Pro chip but I have run into some errors when attempting to build petsc, the script attempts to access > /usr/bin/libtoolize > which does not exist on my MacBook. 
I have libtoolize installed via homebrew and have made a link from the homebrew installation to > usr/local/bin/libtoolize > But the script does not look in the local directory. From what I have gathered online, the /usr/bin/ should not be edited as it is managed by macOS and its system software which can lead to system instability. Do any of you know of a way around to get the petsc script to just look for libtoolize from my path and not /usr/bin/libtoolize? > > Best regards, > Michael Seneca > ? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1314322 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1314322 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From franz.pichler at v2c2.at Tue Jun 6 09:14:08 2023 From: franz.pichler at v2c2.at (Pichler, Franz) Date: Tue, 6 Jun 2023 14:14:08 +0000 Subject: [petsc-users] Petsc ObjectStateIncrease without proivate header In-Reply-To: References: Message-ID: Hello, very sorry for the very late reply, but thank you even more for the ver helpful suggestion! Using valgrinds callgrind I cann see that matassemblyBegin/End takes some cycles, but I guess I can take take, a perfect solution would have no overhead, Ii still changed the code to get rid of the private dependency, Thank you! From: Stefano Zampini Sent: Monday, May 8, 2023 3:31 PM To: Pichler, Franz Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Petsc ObjectStateIncrease without proivate header You can achieve the same effect by calling MatAssemblyBegin/End Il giorno lun 8 mag 2023 alle ore 15:54 Pichler, Franz > ha scritto: Hello, i am using petsc in a single cpu setup where I have preassembled crs matrices that I wrap via PetSC?s MatCreateSeqAIJWithArrays Functionality. Now I manipulate the values of these matrices (wohtout changing the sparsity) without using petsc, When I now want to solve again I have to call PetscObjectStateIncrease((PetscObject)petsc_A); So that oetsc actually solves again (otherwise thinking nothing hs changed , This means I have to include the private header #include Which makes a seamingless implementation of petsc into a cmake process more complicate (This guy has to be stated explicitly in the cmake process at the moment) I would like to resolve that by ?going? around the private header, My first intuition was to increase the state by hand ((PetscObject)petsc_A_aux[the_sys])->state++; This is the definition of petscstateincrease in the header. This throws me an error: invalid use of incomplete type ?struct _p_PetscObject? compilation error. Is there any elegeant way around this? This is the first time I use the petsc mailing list so apologies for any beginners mistake I did in formatting or anything else. Best regards Franz Pichler -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... 
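A minimal sketch of the public-API route described above, assuming the matrix wraps a user-owned CSR value array via MatCreateSeqAIJWithArrays() and that update_values() is a hypothetical application routine:

  update_values(vals);                                /* overwrite the CSR values in place   */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); /* tell PETSc the entries have changed */
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(KSPSolve(ksp, b, x));                     /* the solver now sees the new values  */

This avoids including petsc/private/petscimpl.h and calling PetscObjectStateIncrease() directly.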
URL: From franz.pichler at v2c2.at Tue Jun 6 09:24:38 2023 From: franz.pichler at v2c2.at (Pichler, Franz) Date: Tue, 6 Jun 2023 14:24:38 +0000 Subject: [petsc-users] Petsc using VecDuplicate in solution process Message-ID: Hello, I was just investigating my KSP_Solve_BCGS Routine with algrandcallgrind, I see there that petsc is using a vecduplicate (onvolvin malloc and copy) every time it is called. I call it several thousand times (time evolution problem with rather small matrices) I am not quite sure which vector is copied there but I guess is the initial guess or the rhs, Is there a tool in petsc to avoid any vecduplication by providing a fixed memory for this vector? Some corner facts of my routine: I assemble the matrices(crs,serial) and vectors myself and then use MatCreateSeqAIJWithArrays and VecCreateSeqWithArray To wrap petsc around it, I use a ILU preconditioner and the sparsity patterns between the calls to not change, the values do, Thank you for any hint how to avoid the vecduplicate, Best regards Franz Dr. Franz Pichler Lead Researcher Area E Virtual Vehicle Research GmbH Inffeldgasse 21a, 8010 Graz, Austria Phone: +43 316 873 9818 franz.pichler at v2c2.at www.v2c2.at Firmenname: Virtual Vehicle Research GmbH Rechtsform: Gesellschaft mit beschr?nkter Haftung Firmensitz: Austria, 8010 Graz, Inffeldgasse 21/A Firmenbuchnummer: 224755y Firmenbuchgericht: Landesgericht f?r ZRS Graz UID: ATU54713500 -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Tue Jun 6 09:39:38 2023 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Tue, 6 Jun 2023 09:39:38 -0500 Subject: [petsc-users] Petsc using VecDuplicate in solution process In-Reply-To: References: Message-ID: Il giorno mar 6 giu 2023 alle ore 09:24 Pichler, Franz < franz.pichler at v2c2.at> ha scritto: > Hello, > > I was just investigating my KSP_Solve_BCGS Routine with algrandcallgrind, > > I see there that petsc is using a vecduplicate (onvolvin malloc and copy) > every time it is called. > Do you mean KSPSolve_BCGS? There's only one VecDuplicate in there and it is called only once. An example code showing the problem would help > > I call it several thousand times (time evolution problem with rather small > matrices) > > > > I am not quite sure which vector is copied there but I guess is the > initial guess or the rhs, > > Is there a tool in petsc to avoid any vecduplication by providing a fixed > memory for this vector? > > Some corner facts of my routine: > > I assemble the matrices(crs,serial) and vectors myself and then use > > MatCreateSeqAIJWithArrays and VecCreateSeqWithArray > > To wrap petsc around it, > > > > I use a ILU preconditioner and the sparsity patterns between the calls to > not change, the values do, > > > > Thank you for any hint how to avoid the vecduplicate, > > > > Best regards > > > > Franz > > > > > > *Dr. Franz Pichler* > > Lead Researcher Area E > > > > > > *Virtual Vehicle Research GmbH* > > > > Inffeldgasse 21a, 8010 Graz, Austria > > Phone: +43 316 873 9818 > > franz.pichler at v2c2.at > > www.v2c2.at > > > > Firmenname: Virtual Vehicle Research GmbH > > Rechtsform: Gesellschaft mit beschr?nkter Haftung > > Firmensitz: Austria, 8010 Graz, Inffeldgasse 21/A > > Firmenbuchnummer: 224755y > > Firmenbuchgericht: Landesgericht f?r ZRS Graz > > UID: ATU54713500 > > > > > -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bourdin at mcmaster.ca Tue Jun 6 13:05:10 2023 From: bourdin at mcmaster.ca (Blaise Bourdin) Date: Tue, 6 Jun 2023 18:05:10 +0000 Subject: [petsc-users] MatSetSizes: C vs python Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testL2G2.c Type: application/octet-stream Size: 738 bytes Desc: testL2G2.c URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testL2G2.py Type: text/x-python-script Size: 470 bytes Desc: testL2G2.py URL: From stefano.zampini at gmail.com Tue Jun 6 13:35:42 2023 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Tue, 6 Jun 2023 13:35:42 -0500 Subject: [petsc-users] MatSetSizes: C vs python In-Reply-To: References: Message-ID: The right call in petsc4py is P.setSizes(((nrowsLoc,PETSc.DECIDE),(ncolsLoc,PETSc.DECIDE)),1) https://petsc.org/main/petsc4py/reference/petsc4py.PETSc.Mat.html#petsc4py.PETSc.Mat.setSizes Il giorno mar 6 giu 2023 alle ore 13:05 Blaise Bourdin ha scritto: > Hi, > > Does anybody understand why MatSetSizes seem to behave differently in C > and python? > > I would expect the attached examples to be strictly equivalent but the > python version fails in parallel. It may be that the python interface is > different, but I don?t see any mention of this in the python docs. > > Regards, > Blaise > > > SiBookPro:test (master)$ mpirun -np 2 python3 testL2G2.py nrowsLoc: 10 > ncolsLoc: 20 > Traceback (most recent call last): > File "/Users/blaise/Development/ccG_CR/test/testL2G2.py", line 20, in > > nrowsLoc: 11 ncolsLoc: 21 > Traceback (most recent call last): > File "/Users/blaise/Development/ccG_CR/test/testL2G2.py", line 20, in > > sys.exit(main()) > sys.exit(main()) > ^^^^^^ > File "/Users/blaise/Development/ccG_CR/test/testL2G2.py", line 12, in > main > ^^^^^^ > File "/Users/blaise/Development/ccG_CR/test/testL2G2.py", line 12, in > main > P.setSizes([nrowsLoc,ncolsLoc],1) > P.setSizes([nrowsLoc,ncolsLoc],1) > File "petsc4py/PETSc/Mat.pyx", line 323, in petsc4py.PETSc.Mat.setSizes > petsc4py.PETSc.Error: error code 62 > [1] MatSetSizes() at /opt/HPC/petsc-release/src/mat/utils/gcreate.c:161 > [1] Invalid argument > [1] Int value must be same on all processes, argument # 4 > File "petsc4py/PETSc/Mat.pyx", line 323, in petsc4py.PETSc.Mat.setSizes > petsc4py.PETSc.Error: error code 62 > [0] MatSetSizes() at /opt/HPC/petsc-release/src/mat/utils/gcreate.c:161 > [0] Invalid argument > [0] Int value must be same on all processes, argument # 4 > > > > > ? > Canada Research Chair in Mathematical and Computational Aspects of Solid > Mechanics (Tier 1) > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243 > > -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From franz.pichler at v2c2.at Wed Jun 7 04:11:40 2023 From: franz.pichler at v2c2.at (Pichler, Franz) Date: Wed, 7 Jun 2023 09:11:40 +0000 Subject: [petsc-users] Petsc using VecDuplicate in solution process In-Reply-To: References: Message-ID: Hello thanks for the reply, I created a working minimal example (as minimal as I can think of?) 
that I include here, even though I am not sure which is the best format to do this, I just add some plain text: //########################################################################################## #include #include #include #include #include #include #include /*I "petscksp.h" I*/ #include #include #include #include class petsc_solver{ Vec petsc_x, petsc_b; Mat petsc_A; KSP petsc_ksp; PC petsc_pc; int linear_iter; KSPConvergedReason reason; bool first_time; int n_rows; int number_of_pc_rebuilds=0; public: petsc_solver() { KSPCreate(PETSC_COMM_WORLD, &petsc_ksp); KSPSetFromOptions(petsc_ksp); KSPGetPC(petsc_ksp, &petsc_pc); KSPSetType(petsc_ksp, KSPBCGS); PCSetType(petsc_pc, PCILU); PCFactorSetLevels(petsc_pc, 0); KSPSetInitialGuessNonzero(petsc_ksp, PETSC_TRUE); KSPSetTolerances(petsc_ksp, 1.e-12, 1.e-8, 1e14,1000); } void set_matrix(std::vector& dsp,std::vector& col,std::vector& val){ int *ptr_dsp = dsp.data(); int *ptr_col = col.data(); double *ptr_ele = val.data(); n_rows=dsp.size()-1; std::cout<<"set petsc matrix, n_rows:"<& val){ double *ptr_ele = val.data(); VecCreateSeqWithArray(PETSC_COMM_WORLD, 1, n_rows, NULL,&petsc_b); VecPlaceArray(petsc_b, ptr_ele); } void set_sol(std::vector& val){ double *ptr_ele = val.data(); VecCreateSeqWithArray(PETSC_COMM_WORLD, 1, n_rows, NULL,&petsc_x); VecPlaceArray(petsc_x, ptr_ele); } int solve(bool force_rebuild) { int solver_stat = 0; KSPGetPC(petsc_ksp, &petsc_pc); int ierr; // ierr = PetscObjectStateIncrease((PetscObject)petsc_A); // ierr = PetscObjectStateIncrease((PetscObject)petsc_b); MatAssemblyBegin(petsc_A,MAT_FINAL_ASSEMBLY); MatAssemblyEnd(petsc_A,MAT_FINAL_ASSEMBLY); VecAssemblyBegin(petsc_b); VecAssemblyEnd(petsc_b); // KSPSetOperators(petsc_ksp, petsc_A, petsc_A); ierr = KSPSolve(petsc_ksp, petsc_b, petsc_x); KSPGetConvergedReason(petsc_ksp, &reason); if (reason < 0){ KSPGetIterationNumber(petsc_ksp, &linear_iter); std::cout<<"NOT CONVERGED!\n"; // PetscPrintf(PETSC_COMM_WORLD,"KSPConvergedReason _aux: %D PCREUSE: %D (%D=False %D=True) IERR:%i ITERS:%i\n",reason, reuse, PETSC_FALSE, PETSC_TRUE, ierr,linear_iter); return -1; } KSPGetIterationNumber(petsc_ksp, &linear_iter); return linear_iter; } }; void change_rhs(int i, int n_rows,std::vector&rhs){ for(int row=0;row& vals){ int nnz = n_rows*3-2; for(int row=0;row& dsp,std::vector& col,std::vector& val ){ int nnz = n_rows*3-2; std::cout<<"SETCRS ROWS:"<& dsp,std::vector& col,std::vector& val ,std::vector& sol,std::vector rhs){ int n_rows=dsp.size()-1; double res=0; for (int row=0;row rhs(n_rows); std::vector sol(n_rows); std::vector dsp; std::vector cols; std::vector vals; set_crs_structure(n_rows,dsp,cols,vals); PetscInitializeNoArguments(); petsc_solver p; p.set_matrix(dsp,cols,vals); p.set_rhs(rhs); p.set_sol(sol); for (int i=0;i<100;i++){ change_rhs(i,n_rows,rhs); change_matrix(i,n_rows,vals); // std::cout<<"RES BEFORE:"< Sent: Tuesday, June 6, 2023 4:40 PM To: Pichler, Franz Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Petsc using VecDuplicate in solution process Il giorno mar 6 giu 2023 alle ore 09:24 Pichler, Franz > ha scritto: Hello, I was just investigating my KSP_Solve_BCGS Routine with algrandcallgrind, I see there that petsc is using a vecduplicate (onvolvin malloc and copy) every time it is called. Do you mean KSPSolve_BCGS? There's only one VecDuplicate in there and it is called only once. 
An example code showing the problem would help I call it several thousand times (time evolution problem with rather small matrices) I am not quite sure which vector is copied there but I guess is the initial guess or the rhs, Is there a tool in petsc to avoid any vecduplication by providing a fixed memory for this vector? Some corner facts of my routine: I assemble the matrices(crs,serial) and vectors myself and then use MatCreateSeqAIJWithArrays and VecCreateSeqWithArray To wrap petsc around it, I use a ILU preconditioner and the sparsity patterns between the calls to not change, the values do, Thank you for any hint how to avoid the vecduplicate, Best regards Franz Dr. Franz Pichler Lead Researcher Area E Virtual Vehicle Research GmbH Inffeldgasse 21a, 8010 Graz, Austria Phone: +43 316 873 9818 franz.pichler at v2c2.at www.v2c2.at Firmenname: Virtual Vehicle Research GmbH Rechtsform: Gesellschaft mit beschr?nkter Haftung Firmensitz: Austria, 8010 Graz, Inffeldgasse 21/A Firmenbuchnummer: 224755y Firmenbuchgericht: Landesgericht f?r ZRS Graz UID: ATU54713500 -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Jun 7 08:10:01 2023 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 7 Jun 2023 08:10:01 -0500 Subject: [petsc-users] Petsc using VecDuplicate in solution process In-Reply-To: References: Message-ID: <8CBC3AC2-71DF-48C0-BA51-F8F5C296BD8E@petsc.dev> Can you please present the all output that callgrind is outputing to you that provides this indication. > On Jun 7, 2023, at 4:11 AM, Pichler, Franz wrote: > > Hello thanks for the reply, > > I created a working minimal example (as minimal as I can think of?) that I include here, even though I am not sure which is the best format to do this, I just add some plain text: > //########################################################################################## > #include > #include > #include > #include > #include > #include > #include /*I "petscksp.h" I*/ > #include > #include > #include > #include > > class petsc_solver{ > Vec petsc_x, petsc_b; > Mat petsc_A; > KSP petsc_ksp; > PC petsc_pc; > int linear_iter; > KSPConvergedReason reason; > bool first_time; > int n_rows; > int number_of_pc_rebuilds=0; > public: > petsc_solver() { > KSPCreate(PETSC_COMM_WORLD, &petsc_ksp); > KSPSetFromOptions(petsc_ksp); > KSPGetPC(petsc_ksp, &petsc_pc); > KSPSetType(petsc_ksp, KSPBCGS); > PCSetType(petsc_pc, PCILU); > PCFactorSetLevels(petsc_pc, 0); > KSPSetInitialGuessNonzero(petsc_ksp, PETSC_TRUE); > KSPSetTolerances(petsc_ksp, 1.e-12, 1.e-8, 1e14,1000); > } > void set_matrix(std::vector& dsp,std::vector& col,std::vector& val){ > int *ptr_dsp = dsp.data(); > int *ptr_col = col.data(); > double *ptr_ele = val.data(); > n_rows=dsp.size()-1; > std::cout<<"set petsc matrix, n_rows:"< MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD, n_rows,n_rows, ptr_dsp, ptr_col, ptr_ele,&petsc_A); > MatSetOption(petsc_A, MAT_NEW_NONZERO_LOCATIONS, PETSC_FALSE); > MatSetOption(petsc_A, MAT_NO_OFF_PROC_ZERO_ROWS, PETSC_TRUE); > KSPSetOperators(petsc_ksp, petsc_A, petsc_A); > } > void set_rhs(std::vector& val){ > double *ptr_ele = val.data(); > VecCreateSeqWithArray(PETSC_COMM_WORLD, 1, n_rows, NULL,&petsc_b); > VecPlaceArray(petsc_b, ptr_ele); > } > void set_sol(std::vector& val){ > double *ptr_ele = val.data(); > VecCreateSeqWithArray(PETSC_COMM_WORLD, 1, n_rows, NULL,&petsc_x); > VecPlaceArray(petsc_x, ptr_ele); > } > > int solve(bool force_rebuild) { > int solver_stat = 0; > KSPGetPC(petsc_ksp, 
&petsc_pc); > int ierr; > // ierr = PetscObjectStateIncrease((PetscObject)petsc_A); > // ierr = PetscObjectStateIncrease((PetscObject)petsc_b); > MatAssemblyBegin(petsc_A,MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(petsc_A,MAT_FINAL_ASSEMBLY); > VecAssemblyBegin(petsc_b); > VecAssemblyEnd(petsc_b); > > // KSPSetOperators(petsc_ksp, petsc_A, petsc_A); > ierr = KSPSolve(petsc_ksp, petsc_b, petsc_x); > KSPGetConvergedReason(petsc_ksp, &reason); > if (reason < 0){ > KSPGetIterationNumber(petsc_ksp, &linear_iter); > std::cout<<"NOT CONVERGED!\n"; > // PetscPrintf(PETSC_COMM_WORLD,"KSPConvergedReason _aux: %D PCREUSE: %D (%D=False %D=True) IERR:%i ITERS:%i\n",reason, reuse, PETSC_FALSE, PETSC_TRUE, ierr,linear_iter); > return -1; > } > KSPGetIterationNumber(petsc_ksp, &linear_iter); > return linear_iter; > } > }; > void change_rhs(int i, int n_rows,std::vector&rhs){ > for(int row=0;row } > void change_matrix(int i, int n_rows,std::vector& vals){ > int nnz = n_rows*3-2; > for(int row=0;row if(row==0) { > vals[0]=3+cos(i+row);//pseduo random something > vals[1]=-1+0.3*cos(i+row);//pseduo random something > }else if(row==n_rows-1){ > vals[nnz-1]=3+cos(i+row);//pseduo random something > vals[nnz-2]=-1+0.3*cos(i+row);//pseduo random something > }else{ > vals[2+(row-1)*3] =-1+0.1*cos(i+row);//pseduo random something > vals[2+(row-1)*3+1] = 4+0.3*cos(i+row);//pseduo random something > vals[3+(row-1)*3+2] =-1+0.2*cos(i+row);//pseduo random something > } > } > } > void set_crs_structure(int n_rows,std::vector& dsp,std::vector& col,std::vector& val ){ > int nnz = n_rows*3-2; > std::cout<<"SETCRS ROWS:"< dsp.resize(n_rows+1); > col.resize(nnz); > val.resize(nnz); > for(int row=0;row if(row==0) { > col[0]=0; > col[1]=1; > dsp[row+1]=dsp[row]+2; > }else if(row==n_rows-1){ > col[2+(row-1)*3+0]=row-1; > col[2+(row-1)*3+1]=row; > dsp[row+1]=dsp[row]+2; > } > else{ > dsp[row+1]=dsp[row]+3; > col[2+(row-1)*3+0]=row-1; > col[2+(row-1)*3+1]=row; > col[2+(row-1)*3+2]=row+1; > } > } > } > double check_res(std::vector& dsp,std::vector& col,std::vector& val ,std::vector& sol,std::vector rhs){ > int n_rows=dsp.size()-1; > double res=0; > for (int row=0;row for(int entry=dsp[row];entry int c=col[entry]; > rhs[row]-=val[entry]*sol[c]; > } > res+=rhs[row]*rhs[row]; > } > return sqrt(res); > } > int main(int argc, char **argv) { > > int n_rows = 20; > std::vector rhs(n_rows); > std::vector sol(n_rows); > std::vector dsp; > std::vector cols; > std::vector vals; > set_crs_structure(n_rows,dsp,cols,vals); > PetscInitializeNoArguments(); > petsc_solver p; > p.set_matrix(dsp,cols,vals); > p.set_rhs(rhs); > p.set_sol(sol); > for (int i=0;i<100;i++){ > change_rhs(i,n_rows,rhs); > change_matrix(i,n_rows,vals); > // std::cout<<"RES BEFORE:"< int iter = p.solve(false); > std::cout<<"SOL:"< } > PetscFinalize(); > return -1; > } > //########################################################################################## > > This is a full working minimal example > When I callgrind this, it shows me that the vecduplicate is called as often as the solve process itself, > I hope this clarifies my issue, > > Best regards, > > Franz > > From: Stefano Zampini > > Sent: Tuesday, June 6, 2023 4:40 PM > To: Pichler, Franz > > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Petsc using VecDuplicate in solution process > > > > Il giorno mar 6 giu 2023 alle ore 09:24 Pichler, Franz > ha scritto: > Hello, > I was just investigating my KSP_Solve_BCGS Routine with algrandcallgrind, > I see there that petsc is using a vecduplicate (onvolvin 
malloc and copy) every time it is called. > > Do you mean KSPSolve_BCGS? > > There's only one VecDuplicate in there and it is called only once. An example code showing the problem would help > > > > I call it several thousand times (time evolution problem with rather small matrices) > > I am not quite sure which vector is copied there but I guess is the initial guess or the rhs, > Is there a tool in petsc to avoid any vecduplication by providing a fixed memory for this vector? > > Some corner facts of my routine: > I assemble the matrices(crs,serial) and vectors myself and then use > MatCreateSeqAIJWithArrays and VecCreateSeqWithArray > To wrap petsc around it, > > I use a ILU preconditioner and the sparsity patterns between the calls to not change, the values do, > > Thank you for any hint how to avoid the vecduplicate, > > Best regards > > Franz > > > Dr. Franz Pichler > Lead Researcher Area E > > > Virtual Vehicle Research GmbH > > Inffeldgasse 21a, 8010 Graz, Austria > Phone: +43 316 873 9818 > franz.pichler at v2c2.at > www.v2c2.at > > Firmenname: Virtual Vehicle Research GmbH > Rechtsform: Gesellschaft mit beschr?nkter Haftung > Firmensitz: Austria, 8010 Graz, Inffeldgasse 21/A > Firmenbuchnummer: 224755y > Firmenbuchgericht: Landesgericht f?r ZRS Graz > UID: ATU54713500 > > > > > -- > Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From kalle.karhapaa at tuni.fi Wed Jun 7 06:07:22 2023 From: kalle.karhapaa at tuni.fi (=?iso-8859-1?Q?Kalle_Karhap=E4=E4_=28TAU=29?=) Date: Wed, 7 Jun 2023 11:07:22 +0000 Subject: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT Message-ID: Hi! I am using petsc in a topology optimization project with sparselizard and ipopt. I am hoping to use mpich to run sparselizard/ipopt calculations faster, but I'm getting the following error straight away: vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2 [proxy:0:0 at WKS-101259-LT] HYD_pmcd_pmi_parse_pmi_cmd (pm/pmiserv/common.c:57): [proxy:0:0 at WKS-101259-LT] handle_pmi_cmd (pm/pmiserv/pmip_cb.c:115): unable to parse PMI command [proxy:0:0 at WKS-101259-LT] pmi_cb (pm/pmiserv/pmip_cb.c:362): unable to handle PMI command [proxy:0:0 at WKS-101259-LT] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status [proxy:0:0 at WKS-101259-LT] main (pm/pmiserv/pmip.c:169): demux engine error waiting for event the problem persists with different numbers of cores -np 1...10. Sometimes after the previous message there is the bonus error: Fatal error in internal_Init: Other MPI error, error stack: internal_Init(66): MPI_Init(argc=(nil), argv=(nil)) failed internal_Init(46): Cannot call MPI_INIT or MPI_INIT_THREAD more than once In petsc configuration I am downloading mpich. Then I'm building the sparselizard project with the same mpich downloaded through petsc installation. 
here is my petsc conf: ./configure --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3'; petsc install went as follows: vrkaka at WKS-101259-LT:~/sparselizardipopt/install_external_libs$ ./install_petsc.sh mkdir: cannot create directory '/home/vrkaka/SLlibs': File exists __________________________________________ FETCHING THE LATEST PETSC VERSION FROM GIT Cloning into 'petsc'... remote: Enumerating objects: 1097079, done. remote: Counting objects: 100% (687/687), done. remote: Compressing objects: 100% (144/144), done. remote: Total 1097079 (delta 555), reused 664 (delta 539), pack-reused 1096392 Receiving objects: 100% (1097079/1097079), 344.72 MiB | 7.14 MiB/s, done. Resolving deltas: 100% (840415/840415), done. __________________________________________ CONFIGURING PETSC ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= ============================================================================================= Trying to download https://github.com/pmodels/mpich/releases/download/v4.1.1/mpich-4.1.1.tar.gz for MPICH ============================================================================================= ============================================================================================= Running configure on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Running make on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Running make install on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-sowing.git for SOWING ============================================================================================= ============================================================================================= Running configure on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running make on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running make install on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running arch-linux-c-opt/bin/bfort to generate Fortran stubs 
============================================================================================= ============================================================================================= Trying to download http://www.zlib.net/zlib-1.2.13.tar.gz for ZLIB ============================================================================================= ============================================================================================= Building and installing zlib; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.2/src/hdf5-1.12.2.tar.bz2 for HDF5 ============================================================================================= ============================================================================================= Running configure on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Running make on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Running make install on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/parallel-netcdf/pnetcdf for PNETCDF ============================================================================================= ============================================================================================= Running libtoolize on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running autoreconf on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running configure on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make install on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/Unidata/netcdf-c/archive/v4.9.1.tar.gz for NETCDF ============================================================================================= ============================================================================================= Running configure on NETCDF; this may take 
several minutes ============================================================================================= ============================================================================================= Running make on NETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make install on NETCDF; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-med.git for MED ============================================================================================= ============================================================================================= Configuring MED with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing MED; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/gsjaardema/seacas.git for EXODUSII ============================================================================================= ============================================================================================= Configuring EXODUSII with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing EXODUSII; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-metis.git for METIS ============================================================================================= ============================================================================================= Configuring METIS with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing METIS; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/xianyi/OpenBLAS.git for OPENBLAS ============================================================================================= ============================================================================================= Compiling OpenBLAS; this may take several minutes ============================================================================================= ============================================================================================= Installing OpenBLAS 
============================================================================================= ============================================================================================= Trying to download https://github.com/Reference-ScaLAPACK/scalapack for SCALAPACK ============================================================================================= ============================================================================================= Configuring SCALAPACK with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing SCALAPACK; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://graal.ens-lyon.fr/MUMPS/MUMPS_5.6.0.tar.gz for MUMPS ============================================================================================= ============================================================================================= Compiling MUMPS; this may take several minutes ============================================================================================= ============================================================================================= Installing MUMPS; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://gitlab.com/slepc/slepc.git for SLEPC ============================================================================================= ============================================================================================= SLEPc examples are available at arch-linux-c-opt/externalpackages/git.slepc export SLEPC_DIR=arch-linux-c-opt ============================================================================================= Compilers: C Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 -fopenmp Version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 C++ Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -fopenmp Version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Fortran Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp Version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Linkers: Shared linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Dynamic linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Libraries linked against: BlasLapack: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: 
  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas
  uses OpenMP; use export OMP_NUM_THREADS=<num_threads> or -omp_num_threads <num_threads> to control the number of threads
  uses 4 byte integers
MPI:
  Version:    4
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec
  Implementation: mpich4
  MPICH_NUMVERSION: 40101300
MPICH:
python:
  Executable:  /usr/bin/python3
openmp:
  Version:  201511
pthread:
cmake:
  Version:    3.22.1
  Executable:  /usr/bin/cmake
openblas:
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas
  uses OpenMP; use export OMP_NUM_THREADS=<num_threads> or -omp_num_threads <num_threads> to control the number of threads
zlib:
  Version:  1.2.13
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lz
hdf5:
  Version:  1.12.2
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lhdf5_hl -lhdf5
netcdf:
  Version:  4.9.1
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lnetcdf
pnetcdf:
  Version:  1.12.3
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lpnetcdf
metis:
  Version:  5.1.0
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmetis
slepc:
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lslepc
regex:
MUMPS:
  Version:  5.6.0
  Includes:   -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
  Libraries:  -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -ldmumps -lmumps_common -lpord -lpthread
  uses OpenMP; use export OMP_NUM_THREADS=<num_threads> or -omp_num_threads <num_threads>
to control the number of threads scalapack: Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lscalapack exodusii: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lexoIIv2for32 -lexodus med: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmedC -lmed sowing: Version: 1.1.26 Executable: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/bfort PETSc: Language used to compile PETSc: C PETSC_ARCH: arch-linux-c-opt PETSC_DIR: /home/vrkaka/SLlibs/petsc Prefix: Scalar type: real Precision: double Support for __float128 Integer size: 4 bytes Single library: yes Shared libraries: yes Memory alignment from malloc(): 16 bytes Using GNU make: /usr/bin/gmake xxx=========================================================================xxx Configure stage complete. Now build PETSc libraries with: make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt all xxx=========================================================================xxx __________________________________________ COMPILING PETSC /usr/bin/python3 ./config/gmakegen.py --petsc-arch=arch-linux-c-opt /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegentest.py --petsc-dir=/home/vrkaka/SLlibs/petsc --petsc-arch=arch-linux-c-opt --testdir=./arch-linux-c-opt/tests make: '/home/vrkaka/SLlibs/petsc' is up to date. make: 'arch-linux-c-opt' is up to date. /home/vrkaka/SLlibs/petsc/lib/petsc/bin/petscnagupgrade.py:14: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives from distutils.version import LooseVersion as Version ========================================== See documentation/faq.html and documentation/bugreporting.html for help with installation problems. Please send EVERYTHING printed out below when reporting problems. Please check the mailing list archives and consider subscribing. 
https://petsc.org/release/community/mailing/ ========================================== Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:19:10 +0300 Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ----------------------------------------- Using PETSc directory: /home/vrkaka/SLlibs/petsc Using PETSc arch: arch-linux-c-opt ----------------------------------------- PETSC_VERSION_RELEASE 0 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 19 PETSC_VERSION_SUBMINOR 2 PETSC_VERSION_DATE "unknown" PETSC_VERSION_GIT "unknown" PETSC_VERSION_DATE_GIT "unknown" ----------------------------------------- Using configure Options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 Using configuration flags: #define PETSC_ARCH "arch-linux-c-opt" #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) #define PETSC_BLASLAPACK_UNDERSCORE 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" #define PETSC_DIR_SEPARATOR '/' #define PETSC_FORTRAN_CHARLEN_T size_t #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_ATTRIBUTEALIGNED 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_BZERO 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_ATOMIC 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 1 #define PETSC_HAVE_CXX_DIALECT_CXX17 1 #define PETSC_HAVE_CXX_DIALECT_CXX20 1 #define PETSC_HAVE_DLADDR 1 #define PETSC_HAVE_DLCLOSE 1 #define PETSC_HAVE_DLERROR 1 #define PETSC_HAVE_DLFCN_H 1 #define PETSC_HAVE_DLOPEN 1 #define PETSC_HAVE_DLSYM 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_DRAND48 1 #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_EXECUTABLE_EXPORT 1 #define PETSC_HAVE_EXODUSII 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FE_VALUES 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORK 1 #define PETSC_HAVE_FORTRAN 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETDOMAINNAME 1 #define PETSC_HAVE_GETHOSTBYNAME 1 #define PETSC_HAVE_GETHOSTNAME 1 #define PETSC_HAVE_GETPAGESIZE 1 #define PETSC_HAVE_GETRUSAGE 1 #define PETSC_HAVE_HDF5 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MED 1 #define 
PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_METIS 1 #define PETSC_HAVE_MKSTEMP 1 #define PETSC_HAVE_MMAP 1 #define PETSC_HAVE_MPICH 1 #define PETSC_HAVE_MPICH_NUMVERSION 40101300 #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 #define PETSC_HAVE_MPIIO 1 #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 #define PETSC_HAVE_MPI_COMBINER_DUP 1 #define PETSC_HAVE_MPI_COMBINER_NAMED 1 #define PETSC_HAVE_MPI_F90MODULE 1 #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 #define PETSC_HAVE_MPI_INIT_THREAD 1 #define PETSC_HAVE_MPI_INT64_T 1 #define PETSC_HAVE_MPI_LARGE_COUNT 1 #define PETSC_HAVE_MPI_LONG_DOUBLE 1 #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 #define PETSC_HAVE_MPI_ONE_SIDED 1 #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 #define PETSC_HAVE_MPI_RGET 1 #define PETSC_HAVE_MPI_WIN_CREATE 1 #define PETSC_HAVE_MUMPS 1 #define PETSC_HAVE_NANOSLEEP 1 #define PETSC_HAVE_NETCDF 1 #define PETSC_HAVE_NETDB_H 1 #define PETSC_HAVE_NETINET_IN_H 1 #define PETSC_HAVE_OPENBLAS 1 #define PETSC_HAVE_OPENMP 1 #define PETSC_HAVE_PACKAGES ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" #define PETSC_HAVE_PNETCDF 1 #define PETSC_HAVE_POPEN 1 #define PETSC_HAVE_POSIX_MEMALIGN 1 #define PETSC_HAVE_PTHREAD 1 #define PETSC_HAVE_PWD_H 1 #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_READLINK 1 #define PETSC_HAVE_REALPATH 1 #define PETSC_HAVE_REAL___FLOAT128 1 #define PETSC_HAVE_REGEX 1 #define PETSC_HAVE_RTLD_GLOBAL 1 #define PETSC_HAVE_RTLD_LAZY 1 #define PETSC_HAVE_RTLD_LOCAL 1 #define PETSC_HAVE_RTLD_NOW 1 #define PETSC_HAVE_SCALAPACK 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_SLEEP 1 #define PETSC_HAVE_SLEPC 1 #define PETSC_HAVE_SNPRINTF 1 #define PETSC_HAVE_SOCKET 1 #define PETSC_HAVE_SOWING 1 #define PETSC_HAVE_SO_REUSEADDR 1 #define PETSC_HAVE_STDATOMIC_H 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_STRCASECMP 1 #define PETSC_HAVE_STRINGS_H 1 #define PETSC_HAVE_STRUCT_SIGACTION 1 #define PETSC_HAVE_SYS_PARAM_H 1 #define PETSC_HAVE_SYS_PROCFS_H 1 #define PETSC_HAVE_SYS_RESOURCE_H 1 #define PETSC_HAVE_SYS_SOCKET_H 1 #define PETSC_HAVE_SYS_TIMES_H 1 #define PETSC_HAVE_SYS_TIME_H 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_SYS_UTSNAME_H 1 #define PETSC_HAVE_SYS_WAIT_H 1 #define PETSC_HAVE_TAU_PERFSTUBS 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_UNAME 1 #define PETSC_HAVE_UNISTD_H 1 #define PETSC_HAVE_USLEEP 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_HDF5_HAVE_PARALLEL 1 #define PETSC_HDF5_HAVE_ZLIB 1 #define PETSC_INTPTR_T intptr_t #define PETSC_INTPTR_T_FMT "#" PRIxPTR #define PETSC_IS_COLORING_MAX USHRT_MAX #define PETSC_IS_COLORING_VALUE_TYPE short #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 #define PETSC_LEVEL1_DCACHE_LINESIZE 64 #define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" #define PETSC_MAX_PATH_LEN 4096 #define PETSC_MEMALIGN 16 #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib 
-Wl,--enable-new-dtags -lmpi" #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PYTHON_EXE "/usr/bin/python3" #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_REPLACE_DIR_SEPARATOR '\\' #define PETSC_SIGNAL_CAST #define PETSC_SIZEOF_INT 4 #define PETSC_SIZEOF_LONG 8 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_SLSUFFIX "so" #define PETSC_UINTPTR_T uintptr_t #define PETSC_UINTPTR_T_FMT "#" PRIxPTR #define PETSC_UNUSED __attribute((unused)) #define PETSC_USE_AVX512_KERNELS 1 #define PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_CTABLE 1 #define PETSC_USE_DMLANDAU_2D 1 #define PETSC_USE_INFO 1 #define PETSC_USE_ISATTY 1 #define PETSC_USE_LOG 1 #define PETSC_USE_MALLOC_COALESCED 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_USE_SHARED_LIBRARIES 1 #define PETSC_USE_SINGLE_LIBRARY 1 #define PETSC_USE_SOCKET_VIEWER 1 #define PETSC_USE_VISIBILITY_C 1 #define PETSC_USE_VISIBILITY_CXX 1 #define PETSC_USING_64BIT_PTR 1 #define PETSC_USING_F2003 1 #define PETSC_USING_F90FREEFORM 1 #define PETSC_VERSION_BRANCH_GIT "main" #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" #define PETSC__BSD_SOURCE 1 #define PETSC__DEFAULT_SOURCE 1 #define PETSC__GNU_SOURCE 1 ----------------------------------------- Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 mpicc -show: gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi C compiler version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp mpicxx -show: g++ -Wno-lto-type-mismatch -Wno-psabi -O3 -std=gnu++20 -fPIC -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpicxx -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi C++ compiler version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp mpif90 -show: gfortran -fPIC -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -O3 -fallow-argument-mismatch -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpifort -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi Fortran 
compiler version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 ----------------------------------------- Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 ----------------------------------------- Using system modules: Using mpi.h: # 1 "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include/mpi.h" 1 ----------------------------------------- Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ ------------------------------------------ Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec ------------------------------------------ Using MAKE: /usr/bin/gmake Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc ========================================== /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= libs FC arch-linux-c-opt/obj/sys/fsrc/somefort.o CXX arch-linux-c-opt/obj/sys/dll/cxx/demangle.o FC arch-linux-c-opt/obj/sys/f90-src/fsrc/f90_fwrap.o CC arch-linux-c-opt/obj/sys/f90-custom/zsysf90.o FC arch-linux-c-opt/obj/sys/f90-mod/petscsysmod.o CC arch-linux-c-opt/obj/sys/dll/dlimpl.o CC arch-linux-c-opt/obj/sys/dll/dl.o CC arch-linux-c-opt/obj/sys/dll/ftn-auto/regf.o CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostcontext.o CC arch-linux-c-opt/obj/sys/ftn-custom/zsys.o CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostdevice.o CC arch-linux-c-opt/obj/sys/ftn-custom/zutils.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/global_dcontext.o CC arch-linux-c-opt/obj/sys/dll/reg.o CC arch-linux-c-opt/obj/sys/logging/xmlviewer.o CC arch-linux-c-opt/obj/sys/logging/utils/stack.o CC arch-linux-c-opt/obj/sys/logging/utils/classlog.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/device.o CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zpetscloghf.o CC arch-linux-c-opt/obj/sys/logging/utils/stagelog.o CC arch-linux-c-opt/obj/sys/logging/ftn-auto/xmllogeventf.o CC arch-linux-c-opt/obj/sys/logging/ftn-auto/plogf.o CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zplogf.o CC arch-linux-c-opt/obj/sys/logging/utils/eventlog.o CC arch-linux-c-opt/obj/sys/python/ftn-custom/zpythonf.o CC arch-linux-c-opt/obj/sys/utils/arch.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/memory.o CC arch-linux-c-opt/obj/sys/python/pythonsys.o CC arch-linux-c-opt/obj/sys/utils/fhost.o CC arch-linux-c-opt/obj/sys/utils/fuser.o CC arch-linux-c-opt/obj/sys/utils/matheq.o CC arch-linux-c-opt/obj/sys/utils/mathclose.o CC arch-linux-c-opt/obj/sys/utils/mathfit.o CC arch-linux-c-opt/obj/sys/utils/mathinf.o CC arch-linux-c-opt/obj/sys/utils/ctable.o CC arch-linux-c-opt/obj/sys/utils/memc.o 
CC arch-linux-c-opt/obj/sys/utils/mpilong.o CC arch-linux-c-opt/obj/sys/logging/xmllogevent.o CC arch-linux-c-opt/obj/sys/utils/mpitr.o CC arch-linux-c-opt/obj/sys/utils/mpishm.o CC arch-linux-c-opt/obj/sys/utils/pbarrier.o CC arch-linux-c-opt/obj/sys/utils/mpiu.o CC arch-linux-c-opt/obj/sys/utils/psleep.o CC arch-linux-c-opt/obj/sys/utils/pdisplay.o CC arch-linux-c-opt/obj/sys/utils/psplit.o CC arch-linux-c-opt/obj/sys/utils/segbuffer.o CC arch-linux-c-opt/obj/sys/utils/mpimesg.o CC arch-linux-c-opt/obj/sys/utils/sortd.o CC arch-linux-c-opt/obj/sys/utils/sseenabled.o CC arch-linux-c-opt/obj/sys/utils/sortip.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zarchf.o CC arch-linux-c-opt/obj/sys/utils/mpits.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zfhostf.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zsortsof.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zstrf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/memcf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpitsf.o CC arch-linux-c-opt/obj/sys/logging/plog.o CC arch-linux-c-opt/obj/sys/utils/str.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpiuf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psleepf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psplitf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortdf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortipf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortsof.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortif.o CC arch-linux-c-opt/obj/sys/totalview/tv_data_display.o CC arch-linux-c-opt/obj/sys/objects/gcomm.o CC arch-linux-c-opt/obj/sys/objects/gcookie.o CC arch-linux-c-opt/obj/sys/objects/fcallback.o CC arch-linux-c-opt/obj/sys/objects/destroy.o CC arch-linux-c-opt/obj/sys/objects/gtype.o CC arch-linux-c-opt/obj/sys/utils/sorti.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/dcontext.o CC arch-linux-c-opt/obj/sys/objects/olist.o CC arch-linux-c-opt/obj/sys/objects/garbage.o CC arch-linux-c-opt/obj/sys/objects/pgname.o CC arch-linux-c-opt/obj/sys/objects/package.o CC arch-linux-c-opt/obj/sys/objects/inherit.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/mark_dcontext.o CC arch-linux-c-opt/obj/sys/utils/sortso.o CC arch-linux-c-opt/obj/sys/objects/aoptions.o CC arch-linux-c-opt/obj/sys/objects/prefix.o CC arch-linux-c-opt/obj/sys/objects/init.o CC arch-linux-c-opt/obj/sys/objects/pname.o CC arch-linux-c-opt/obj/sys/objects/ptype.o CC arch-linux-c-opt/obj/sys/objects/state.o CC arch-linux-c-opt/obj/sys/objects/version.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/destroyf.o CC arch-linux-c-opt/obj/sys/objects/device/util/memory.o CC arch-linux-c-opt/obj/sys/objects/device/util/devicereg.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcommf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcookief.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/inheritf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/optionsf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/pinitf.o CC arch-linux-c-opt/obj/sys/objects/tagm.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/statef.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/subcommf.o CC arch-linux-c-opt/obj/sys/objects/subcomm.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/tagmf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgcommf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zdestroyf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgtype.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zinheritf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsyamlf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpackage.o CC 
arch-linux-c-opt/obj/sys/objects/ftn-custom/zpgnamef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpnamef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zprefixf.o CC arch-linux-c-opt/obj/sys/objects/pinit.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zptypef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstartf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zversionf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstart.o CC arch-linux-c-opt/obj/sys/memory/mhbw.o CC arch-linux-c-opt/obj/sys/memory/mem.o CC arch-linux-c-opt/obj/sys/memory/ftn-auto/memf.o CC arch-linux-c-opt/obj/sys/memory/ftn-custom/zmtrf.o CC arch-linux-c-opt/obj/sys/memory/mal.o CC arch-linux-c-opt/obj/sys/memory/ftn-auto/mtrf.o CC arch-linux-c-opt/obj/sys/perfstubs/pstimer.o CC arch-linux-c-opt/obj/sys/error/errabort.o CC arch-linux-c-opt/obj/sys/error/checkptr.o CC arch-linux-c-opt/obj/sys/error/errstop.o CC arch-linux-c-opt/obj/sys/error/pstack.o CC arch-linux-c-opt/obj/sys/error/adebug.o CC arch-linux-c-opt/obj/sys/error/errtrace.o CC arch-linux-c-opt/obj/sys/error/fp.o CC arch-linux-c-opt/obj/sys/memory/mtr.o CC arch-linux-c-opt/obj/sys/error/signal.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/adebugf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/checkptrf.o CC arch-linux-c-opt/obj/sys/objects/options.o CC arch-linux-c-opt/obj/sys/error/ftn-custom/zerrf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/errf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/fpf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/signalf.o CC arch-linux-c-opt/obj/sys/error/err.o CC arch-linux-c-opt/obj/sys/fileio/fpath.o CC arch-linux-c-opt/obj/sys/fileio/fdir.o CC arch-linux-c-opt/obj/sys/fileio/fwd.o CC arch-linux-c-opt/obj/sys/fileio/ghome.o CC arch-linux-c-opt/obj/sys/fileio/ftest.o CC arch-linux-c-opt/obj/sys/fileio/grpath.o CC arch-linux-c-opt/obj/sys/fileio/rpath.o CC arch-linux-c-opt/obj/sys/fileio/mpiuopen.o CC arch-linux-c-opt/obj/sys/fileio/smatlab.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmpiuopenf.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zghomef.o CC arch-linux-c-opt/obj/sys/fileio/fretrieve.o CC arch-linux-c-opt/obj/sys/fileio/ftn-auto/sysiof.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmprintf.o CC arch-linux-c-opt/obj/sys/info/ftn-auto/verboseinfof.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zsysiof.o CC arch-linux-c-opt/obj/sys/info/ftn-custom/zverboseinfof.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/axis.o CC arch-linux-c-opt/obj/sys/fileio/mprint.o CC arch-linux-c-opt/obj/sys/info/verboseinfo.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/bars.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/cmap.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/image.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/axisc.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/dscatter.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/lg.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/zoom.o CC arch-linux-c-opt/obj/sys/fileio/sysio.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zlgcf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/hists.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zzoomf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zaxisf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/axiscf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/barsf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/lgc.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/dscatterf.o CC 
arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/histsf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dcoor.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dclear.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgcf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dellipse.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dflush.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpause.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dline.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmarker.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmouse.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpoint.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawregall.o CC arch-linux-c-opt/obj/sys/objects/optionsyaml.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drect.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawreg.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/draw.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtext.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawregf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtextf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dsave.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtrif.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtri.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dclearf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dcoorf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dviewp.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dellipsef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dflushf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmousef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmarkerf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dlinef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpausef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpointf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawregf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drectf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dsavef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtextf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtrif.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dviewpf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/ftn-auto/drawnullf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/drawnull.o CC arch-linux-c-opt/obj/sys/classes/random/interface/dlregisrand.o CC arch-linux-c-opt/obj/sys/classes/random/interface/random.o CC arch-linux-c-opt/obj/sys/classes/random/interface/randreg.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomcf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/tikz/tikz.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-custom/zrandomf.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomf.o CC arch-linux-c-opt/obj/sys/classes/random/interface/randomc.o CC arch-linux-c-opt/obj/sys/classes/random/impls/rand48/rand48.o CC arch-linux-c-opt/obj/sys/classes/random/impls/rand/rand.o CC arch-linux-c-opt/obj/sys/classes/bag/ftn-auto/bagf.o CC 
arch-linux-c-opt/obj/sys/classes/random/impls/rander48/rander48.o CC arch-linux-c-opt/obj/sys/classes/bag/ftn-custom/zbagf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dupl.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/flush.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dlregispetsc.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewa.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewers.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewasetf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewregall.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/view.o CC arch-linux-c-opt/obj/sys/classes/bag/f90-custom/zbagf90.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewaf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/image/drawimage.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewregf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/ftn-auto/glvisf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-auto/drawvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-custom/zdrawvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-custom/zbinvf.o CC arch-linux-c-opt/obj/sys/classes/bag/bag.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-auto/binvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/f90-custom/zbinvf90.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewreg.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/ftn-custom/zsendf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-auto/hdf5vf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/ftn-custom/zstringvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/stringv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-custom/zhdf5f.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/drawv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/send.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/ftn-custom/zvtkvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/glvis.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vu/petscvu.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/vtkv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zvcreatef.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/filevf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/vcreateaf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/vcreatea.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zfilevf.o CC arch-linux-c-opt/obj/sys/time/cputime.o CC arch-linux-c-opt/obj/sys/time/fdate.o CC arch-linux-c-opt/obj/sys/time/ftn-auto/cputimef.o CC arch-linux-c-opt/obj/sys/time/ftn-custom/zptimef.o CC arch-linux-c-opt/obj/sys/f90-src/f90_cwrap.o CC arch-linux-c-opt/obj/vec/pf/interface/pfall.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/hdf5v.o CC arch-linux-c-opt/obj/vec/pf/interface/ftn-custom/zpff.o CC arch-linux-c-opt/obj/vec/pf/interface/ftn-auto/pff.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/binv.o CC arch-linux-c-opt/obj/vec/pf/impls/constant/const.o CC arch-linux-c-opt/obj/vec/pf/interface/pf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/filev.o CC arch-linux-c-opt/obj/vec/pf/impls/string/cstring.o CC arch-linux-c-opt/obj/vec/is/utils/isio.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zhdf5io.o CC 
arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zisltogf.o CC arch-linux-c-opt/obj/vec/is/utils/pmap.o CC arch-linux-c-opt/obj/vec/is/utils/hdf5io.o CC arch-linux-c-opt/obj/vec/is/utils/f90-custom/zisltogf90.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zvsectionisf.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/isltogf.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/pmapf.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/psortf.o CC arch-linux-c-opt/obj/vec/is/is/utils/f90-custom/ziscoloringf90.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-custom/ziscoloringf.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/isblockf.o CC arch-linux-c-opt/obj/vec/is/is/utils/iscomp.o CC arch-linux-c-opt/obj/vec/is/utils/psort.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/iscompf.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/iscoloringf.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/isdifff.o CC arch-linux-c-opt/obj/vec/is/is/utils/isblock.o CC arch-linux-c-opt/obj/vec/is/is/interface/isreg.o CC arch-linux-c-opt/obj/vec/is/is/interface/isregall.o CC arch-linux-c-opt/obj/vec/is/is/interface/f90-custom/zindexf90.o CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-auto/indexf.o CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-custom/zindexf.o CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-auto/isregf.o CC arch-linux-c-opt/obj/vec/is/is/impls/stride/ftn-auto/stridef.o CC arch-linux-c-opt/obj/vec/is/is/utils/isdiff.o CC arch-linux-c-opt/obj/vec/is/is/utils/iscoloring.o CC arch-linux-c-opt/obj/vec/is/is/impls/block/ftn-custom/zblockf.o CC arch-linux-c-opt/obj/vec/is/is/impls/block/ftn-auto/blockf.o FC arch-linux-c-opt/obj/vec/f90-mod/petscvecmod.o CC arch-linux-c-opt/obj/vec/is/is/impls/f90-custom/zblockf90.o CC arch-linux-c-opt/obj/vec/is/is/impls/stride/stride.o CC arch-linux-c-opt/obj/vec/is/is/impls/general/ftn-auto/generalf.o CC arch-linux-c-opt/obj/vec/is/section/interface/ftn-custom/zsectionf.o CC arch-linux-c-opt/obj/vec/is/section/interface/f90-custom/zvsectionisf90.o CC arch-linux-c-opt/obj/vec/is/section/interface/ftn-auto/sectionf.o CC arch-linux-c-opt/obj/vec/is/is/impls/block/block.o CC arch-linux-c-opt/obj/vec/is/ao/interface/aoreg.o CC arch-linux-c-opt/obj/vec/is/ao/interface/ao.o CC arch-linux-c-opt/obj/vec/is/ao/interface/aoregall.o CC arch-linux-c-opt/obj/vec/is/ao/interface/dlregisdm.o CC arch-linux-c-opt/obj/vec/is/ao/interface/ftn-auto/aof.o CC arch-linux-c-opt/obj/vec/is/ao/interface/ftn-custom/zaof.o CC arch-linux-c-opt/obj/vec/is/ao/impls/basic/ftn-custom/zaobasicf.o CC arch-linux-c-opt/obj/vec/is/section/interface/sectionhdf5.o CC arch-linux-c-opt/obj/vec/is/is/impls/general/general.o CC arch-linux-c-opt/obj/vec/is/utils/isltog.o CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/ftn-auto/aomappingf.o CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/ftn-custom/zaomappingf.o CC arch-linux-c-opt/obj/vec/is/is/interface/index.o CC arch-linux-c-opt/obj/vec/is/ao/impls/basic/aobasic.o CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-custom/zsfutilsf.o CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-auto/sfcoordf.o CC arch-linux-c-opt/obj/vec/is/sf/utils/f90-custom/zsfutilsf90.o CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/aomapping.o CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-auto/sfutilsf.o CC arch-linux-c-opt/obj/vec/is/sf/utils/sfcoord.o CC arch-linux-c-opt/obj/vec/is/sf/interface/dlregissf.o CC arch-linux-c-opt/obj/vec/is/sf/interface/sfregi.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-custom/zsf.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-custom/zvscat.o CC 
arch-linux-c-opt/obj/vec/is/sf/interface/sftype.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-auto/sff.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-auto/vscatf.o CC arch-linux-c-opt/obj/vec/is/ao/impls/memscalable/aomemscalable.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/gather/sfgather.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/gatherv/sfgatherv.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfmpi.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/alltoall/sfalltoall.o CC arch-linux-c-opt/obj/vec/is/sf/utils/sfutils.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/allgather/sfallgather.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfbasic.o CC arch-linux-c-opt/obj/vec/is/sf/interface/vscat.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/neighbor/sfneighbor.o CC arch-linux-c-opt/obj/vec/vec/utils/vecglvis.o CC arch-linux-c-opt/obj/vec/is/section/interface/section.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/allgatherv/sfallgatherv.o CC arch-linux-c-opt/obj/vec/vec/utils/vecio.o CC arch-linux-c-opt/obj/vec/vec/utils/vecs.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/dlregistagger.o CC arch-linux-c-opt/obj/vec/vec/utils/comb.o CC arch-linux-c-opt/obj/vec/is/sf/impls/window/sfwindow.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/tagger.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/taggerregi.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/ftn-auto/taggerf.o CC arch-linux-c-opt/obj/vec/vec/utils/vsection.o CC arch-linux-c-opt/obj/vec/vec/utils/projection.o CC arch-linux-c-opt/obj/vec/vec/utils/vecstash.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/absolute.o CC arch-linux-c-opt/obj/vec/is/sf/interface/sf.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/and.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/andor.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/or.o CC arch-linux-c-opt/obj/vec/vec/utils/f90-custom/zvsectionf90.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/relative.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/simple.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/combf.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/projectionf.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/veciof.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/vsectionf.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/vinvf.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/cdf.o CC arch-linux-c-opt/obj/vec/vec/interface/veccreate.o CC arch-linux-c-opt/obj/vec/vec/interface/vecregall.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-custom/zvecregf.o CC arch-linux-c-opt/obj/vec/vec/interface/dlregisvec.o CC arch-linux-c-opt/obj/vec/vec/interface/vecreg.o CC arch-linux-c-opt/obj/vec/vec/interface/f90-custom/zvectorf90.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/veccreatef.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/rvectorf.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/vectorf.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-custom/zvectorf.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec3.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec1.o CC arch-linux-c-opt/obj/vec/vec/utils/vinv.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/vseqcr.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/ftn-custom/zbvec2f.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/ftn-auto/vseqcrf.o CC arch-linux-c-opt/obj/vec/vec/impls/shared/ftn-auto/shvecf.o CC arch-linux-c-opt/obj/vec/vec/impls/shared/shvec.o CC arch-linux-c-opt/obj/vec/vec/impls/nest/ftn-custom/zvecnestf.o CC 
arch-linux-c-opt/obj/vec/vec/impls/nest/ftn-auto/vecnestf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/commonmpvec.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/dvec2.o CC arch-linux-c-opt/obj/vec/vec/interface/vector.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/vmpicr.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pvec2.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec2.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-custom/zpbvecf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/commonmpvecf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/vmpicrf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/pbvecf.o CC arch-linux-c-opt/obj/mat/coarsen/scoarsen.o CC arch-linux-c-opt/obj/mat/coarsen/ftn-auto/coarsenf.o CC arch-linux-c-opt/obj/mat/coarsen/ftn-custom/zcoarsenf.o CC arch-linux-c-opt/obj/vec/vec/interface/rvector.o CC arch-linux-c-opt/obj/mat/coarsen/coarsen.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pbvec.o CC arch-linux-c-opt/obj/mat/coarsen/impls/misk/ftn-auto/miskf.o CC arch-linux-c-opt/obj/vec/vec/impls/nest/vecnest.o CC arch-linux-c-opt/obj/mat/color/utils/bipartite.o FC arch-linux-c-opt/obj/mat/f90-mod/petscmatmod.o CC arch-linux-c-opt/obj/mat/color/utils/valid.o CC arch-linux-c-opt/obj/mat/coarsen/impls/mis/mis.o CC arch-linux-c-opt/obj/mat/color/interface/matcoloring.o CC arch-linux-c-opt/obj/mat/color/interface/matcoloringregi.o CC arch-linux-c-opt/obj/mat/coarsen/impls/misk/misk.o CC arch-linux-c-opt/obj/mat/color/interface/ftn-custom/zmatcoloringf.o CC arch-linux-c-opt/obj/mat/color/interface/ftn-auto/matcoloringf.o CC arch-linux-c-opt/obj/mat/color/utils/weights.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/degr.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/numsrt.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/dsm.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pdvec.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/ido.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/seq.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/setr.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/slo.o CC arch-linux-c-opt/obj/mat/color/impls/power/power.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/color.o CC arch-linux-c-opt/obj/mat/color/impls/natural/natural.o CC arch-linux-c-opt/obj/mat/utils/bandwidth.o CC arch-linux-c-opt/obj/mat/utils/compressedrow.o CC arch-linux-c-opt/obj/mat/utils/convert.o CC arch-linux-c-opt/obj/mat/utils/freespace.o CC arch-linux-c-opt/obj/mat/coarsen/impls/hem/hem.o CC arch-linux-c-opt/obj/mat/utils/getcolv.o CC arch-linux-c-opt/obj/mat/utils/matio.o CC arch-linux-c-opt/obj/mat/utils/matstashspace.o CC arch-linux-c-opt/obj/mat/utils/axpy.o CC arch-linux-c-opt/obj/mat/color/impls/jp/jp.o CC arch-linux-c-opt/obj/mat/utils/pheap.o CC arch-linux-c-opt/obj/mat/utils/gcreate.o CC arch-linux-c-opt/obj/mat/utils/veccreatematdense.o CC arch-linux-c-opt/obj/mat/utils/overlapsplit.o CC arch-linux-c-opt/obj/mat/utils/zerodiag.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/axpyf.o CC arch-linux-c-opt/obj/mat/utils/multequal.o CC arch-linux-c-opt/obj/mat/utils/zerorows.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/bandwidthf.o CC arch-linux-c-opt/obj/mat/color/impls/greedy/greedy.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/gcreatef.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/getcolvf.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/multequalf.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/zerodiagf.o CC arch-linux-c-opt/obj/mat/order/degree.o CC arch-linux-c-opt/obj/mat/order/fn1wd.o CC arch-linux-c-opt/obj/mat/order/fndsep.o CC arch-linux-c-opt/obj/mat/order/fnroot.o CC 
arch-linux-c-opt/obj/mat/order/gen1wd.o CC arch-linux-c-opt/obj/mat/order/gennd.o CC arch-linux-c-opt/obj/mat/order/genrcm.o CC arch-linux-c-opt/obj/mat/order/genqmd.o CC arch-linux-c-opt/obj/mat/order/qmdqt.o CC arch-linux-c-opt/obj/mat/order/qmdmrg.o CC arch-linux-c-opt/obj/mat/order/qmdrch.o CC arch-linux-c-opt/obj/mat/utils/matstash.o CC arch-linux-c-opt/obj/mat/order/qmdupd.o CC arch-linux-c-opt/obj/mat/order/rcm.o CC arch-linux-c-opt/obj/mat/order/rootls.o CC arch-linux-c-opt/obj/mat/order/sp1wd.o CC arch-linux-c-opt/obj/mat/order/spnd.o CC arch-linux-c-opt/obj/mat/order/spqmd.o CC arch-linux-c-opt/obj/mat/order/sprcm.o CC arch-linux-c-opt/obj/mat/order/wbm.o CC arch-linux-c-opt/obj/mat/order/sregis.o CC arch-linux-c-opt/obj/mat/order/ftn-custom/zsorderf.o CC arch-linux-c-opt/obj/mat/order/sorder.o CC arch-linux-c-opt/obj/mat/order/ftn-auto/spectralf.o CC arch-linux-c-opt/obj/mat/order/spectral.o CC arch-linux-c-opt/obj/mat/order/metisnd/metisnd.o CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatnullf.o CC arch-linux-c-opt/obj/mat/interface/matregis.o CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatregf.o CC arch-linux-c-opt/obj/mat/interface/matreg.o CC arch-linux-c-opt/obj/mat/interface/matnull.o CC arch-linux-c-opt/obj/mat/interface/dlregismat.o CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matnullf.o CC arch-linux-c-opt/obj/mat/interface/f90-custom/zmatrixf90.o CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matproductf.o CC arch-linux-c-opt/obj/mat/ftn-custom/zmat.o CC arch-linux-c-opt/obj/mat/matfd/ftn-custom/zfdmatrixf.o CC arch-linux-c-opt/obj/mat/matfd/ftn-auto/fdmatrixf.o CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matrixf.o CC arch-linux-c-opt/obj/mat/interface/matproduct.o CC arch-linux-c-opt/obj/mat/impls/transpose/transm.o CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatrixf.o CC arch-linux-c-opt/obj/mat/impls/transpose/ftn-auto/htransmf.o CC arch-linux-c-opt/obj/mat/impls/transpose/ftn-auto/transmf.o CC arch-linux-c-opt/obj/mat/impls/transpose/htransm.o CC arch-linux-c-opt/obj/mat/matfd/fdmatrix.o CC arch-linux-c-opt/obj/mat/impls/normal/ftn-auto/normmf.o CC arch-linux-c-opt/obj/mat/impls/normal/ftn-auto/normmhf.o CC arch-linux-c-opt/obj/mat/impls/python/ftn-custom/zpythonmf.o CC arch-linux-c-opt/obj/mat/impls/python/pythonmat.o CC arch-linux-c-opt/obj/mat/impls/sell/seq/fdsell.o CC arch-linux-c-opt/obj/mat/impls/sell/seq/ftn-custom/zsellf.o CC arch-linux-c-opt/obj/mat/impls/normal/normmh.o CC arch-linux-c-opt/obj/mat/impls/normal/normm.o CC arch-linux-c-opt/obj/mat/impls/is/ftn-auto/matisf.o CC arch-linux-c-opt/obj/mat/impls/shell/ftn-auto/shellf.o CC arch-linux-c-opt/obj/mat/impls/shell/ftn-custom/zshellf.o CC arch-linux-c-opt/obj/mat/impls/shell/shellcnv.o CC arch-linux-c-opt/obj/mat/impls/sell/mpi/mmsell.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/aijsbaij.o CC arch-linux-c-opt/obj/mat/impls/shell/shell.o CC arch-linux-c-opt/obj/mat/impls/sell/seq/sell.o CC arch-linux-c-opt/obj/mat/impls/sell/mpi/mpisell.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact10.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact3.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact11.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact12.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaij2.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact4.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact5.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact6.o CC 
arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact7.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/ftn-custom/zsbaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sro.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact8.o CC arch-linux-c-opt/obj/mat/impls/is/matis.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/ftn-auto/sbaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/ftn-custom/zmpisbaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact9.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mpiaijsbaij.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/ftn-auto/mpisbaijf.o CC arch-linux-c-opt/obj/mat/impls/kaij/ftn-auto/kaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaij.o CC arch-linux-c-opt/obj/mat/interface/matrix.o CC arch-linux-c-opt/obj/mat/impls/adj/mpi/ftn-custom/zmpiadjf.o CC arch-linux-c-opt/obj/mat/impls/adj/mpi/ftn-auto/mpiadjf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mmsbaij.o CC arch-linux-c-opt/obj/mat/impls/diagonal/ftn-auto/diagonalf.o CC arch-linux-c-opt/obj/mat/impls/scalapack/ftn-auto/matscalapackf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/sbaijov.o CC arch-linux-c-opt/obj/mat/impls/lrc/ftn-auto/lrcf.o CC arch-linux-c-opt/obj/mat/impls/diagonal/diagonal.o CC arch-linux-c-opt/obj/mat/impls/lrc/lrc.o CC arch-linux-c-opt/obj/mat/impls/fft/ftn-custom/zfftf.o CC arch-linux-c-opt/obj/mat/impls/fft/fft.o CC arch-linux-c-opt/obj/mat/impls/dummy/matdummy.o CC arch-linux-c-opt/obj/mat/impls/submat/ftn-auto/submatf.o CC arch-linux-c-opt/obj/mat/impls/cdiagonal/ftn-auto/cdiagonalf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact2.o CC arch-linux-c-opt/obj/mat/impls/submat/submat.o CC arch-linux-c-opt/obj/mat/impls/cdiagonal/cdiagonal.o CC arch-linux-c-opt/obj/mat/impls/maij/ftn-auto/maijf.o CC arch-linux-c-opt/obj/mat/impls/composite/ftn-auto/mcompositef.o CC arch-linux-c-opt/obj/mat/impls/adj/mpi/mpiadj.o CC arch-linux-c-opt/obj/mat/impls/nest/ftn-custom/zmatnestf.o CC arch-linux-c-opt/obj/mat/impls/nest/ftn-auto/matnestf.o CC arch-linux-c-opt/obj/mat/impls/kaij/kaij.o CC arch-linux-c-opt/obj/mat/impls/composite/mcomposite.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijhdf5.o CC arch-linux-c-opt/obj/mat/impls/scalapack/matscalapack.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/ij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/inode2.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/fdaij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matmatmatmult.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matptap.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matrart.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/mattransposematmult.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mpisbaij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/symtranspose.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/ftn-custom/zaijf.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/ftn-auto/aijf.o CC arch-linux-c-opt/obj/mat/impls/nest/matnest.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/bas/basfactor.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijsell/aijsell.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/crl/crl.o CC arch-linux-c-opt/obj/mat/impls/maij/maij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijfact.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijperm/aijperm.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpb_aij.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiaijpc.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/bas/spbas.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimatmatmatmult.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimattransposematmult.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mmaij.o CC 
arch-linux-c-opt/obj/mat/impls/aij/mpi/fdmpiaij.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mumps/ftn-auto/mumpsf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/aijsell/mpiaijsell.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matmatmult.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/ftn-auto/mpiaijf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/aijperm/mpiaijperm.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/ftn-custom/zmpiaijf.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/inode.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/crl/mcrl.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/ftn-custom/zdensef.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/densehdf5.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/ftn-auto/densef.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aij.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/mmdense.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/ftn-custom/zmpidensef.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/ftn-auto/mpidensef.o CC arch-linux-c-opt/obj/mat/impls/preallocator/ftn-auto/matpreallocatorf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimatmatmult.o CC arch-linux-c-opt/obj/mat/impls/preallocator/matpreallocator.o CC arch-linux-c-opt/obj/mat/impls/mffd/mffd.o CC arch-linux-c-opt/obj/mat/impls/mffd/mfregis.o CC arch-linux-c-opt/obj/mat/impls/mffd/mffddef.o CC arch-linux-c-opt/obj/mat/impls/mffd/wp.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/mffddeff.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-custom/zmffdf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mumps/mumps.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/mpidense.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/wpf.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/mffdf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/aijbaij.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/dense.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiptap.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact11.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiov.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact13.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact81.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat1.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat11.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact9.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat14.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baij2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolv.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat15.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran1.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrann.o CC 
arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat1.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgedi.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa4.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiaij.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/ftn-custom/zbaijf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/ftn-auto/baijf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa7.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/ftn-auto/mpibaijf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/ftn-custom/zmpibaijf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpiaijbaij.o CC arch-linux-c-opt/obj/mat/impls/scatter/mscatter.o CC arch-linux-c-opt/obj/mat/impls/scatter/ftn-auto/mscatterf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpb_baij.o CC arch-linux-c-opt/obj/mat/impls/localref/ftn-auto/mlocalreff.o CC arch-linux-c-opt/obj/mat/impls/centering/ftn-auto/centeringf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baij.o CC arch-linux-c-opt/obj/mat/impls/centering/centering.o CC arch-linux-c-opt/obj/mat/impls/localref/mlocalref.o CC arch-linux-c-opt/obj/mat/partition/spartition.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mmbaij.o CC arch-linux-c-opt/obj/mat/partition/ftn-auto/partitionf.o CC arch-linux-c-opt/obj/mat/partition/ftn-custom/zpartitionf.o CC arch-linux-c-opt/obj/dm/dt/space/interface/ftn-auto/spacef.o CC arch-linux-c-opt/obj/mat/partition/partition.o CC arch-linux-c-opt/obj/dm/dt/space/interface/space.o CC arch-linux-c-opt/obj/dm/dt/space/impls/ptrimmed/ftn-auto/spaceptrimmedf.o CC arch-linux-c-opt/obj/mat/partition/impls/hierarchical/hierarchical.o CC arch-linux-c-opt/obj/dm/dt/space/impls/point/ftn-auto/spacepointf.o CC arch-linux-c-opt/obj/dm/dt/space/impls/ptrimmed/spaceptrimmed.o CC arch-linux-c-opt/obj/dm/dt/space/impls/point/spacepoint.o CC arch-linux-c-opt/obj/dm/dt/space/impls/tensor/ftn-auto/spacetensorf.o CC arch-linux-c-opt/obj/mat/impls/blockmat/seq/blockmat.o CC arch-linux-c-opt/obj/dm/dt/space/impls/sum/ftn-auto/spacesumf.o CC arch-linux-c-opt/obj/dm/dt/space/impls/wxy/spacewxy.o CC arch-linux-c-opt/obj/dm/dt/space/impls/subspace/ftn-auto/spacesubspacef.o CC arch-linux-c-opt/obj/dm/dt/space/impls/poly/ftn-auto/spacepolyf.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/feceed.o CC arch-linux-c-opt/obj/dm/dt/space/impls/sum/spacesum.o CC arch-linux-c-opt/obj/dm/dt/space/impls/poly/spacepoly.o FC arch-linux-c-opt/obj/dm/f90-mod/petscdmmod.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-custom/zfef.o CC arch-linux-c-opt/obj/dm/dt/space/impls/tensor/spacetensor.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-auto/fegeomf.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-auto/fef.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/baijov.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/fegeom.o CC arch-linux-c-opt/obj/dm/dt/space/impls/subspace/spacesubspace.o CC arch-linux-c-opt/obj/dm/dt/fv/interface/fvceed.o CC 
arch-linux-c-opt/obj/dm/dt/fv/interface/ftn-auto/fvf.o CC arch-linux-c-opt/obj/dm/dt/fv/interface/ftn-custom/zfvf.o CC arch-linux-c-opt/obj/dm/dt/fe/impls/composite/fecomposite.o CC arch-linux-c-opt/obj/dm/dt/interface/dtprob.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdsf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdtf.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/fe.o CC arch-linux-c-opt/obj/dm/dt/fv/interface/fv.o CC arch-linux-c-opt/obj/dm/dt/interface/f90-custom/zdtdsf90.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdtfef.o CC arch-linux-c-opt/obj/dm/dt/interface/f90-custom/zdtf90.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtaltvf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtdsf.o CC arch-linux-c-opt/obj/dm/dt/fe/impls/basic/febasic.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtprobf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtweakformf.o CC arch-linux-c-opt/obj/dm/dt/dualspace/interface/ftn-auto/dualspacef.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/refined/ftn-auto/dualspacerefinedf.o CC arch-linux-c-opt/obj/dm/dt/interface/dtweakform.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/refined/dualspacerefined.o CC arch-linux-c-opt/obj/dm/dt/interface/dtaltv.o CC arch-linux-c-opt/obj/dm/dt/interface/dtds.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/lagrange/ftn-auto/dspacelagrangef.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/simple/ftn-auto/dspacesimplef.o CC arch-linux-c-opt/obj/dm/label/ftn-custom/zdmlabel.o CC arch-linux-c-opt/obj/dm/label/ftn-auto/dmlabelf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpibaij.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/simple/dspacesimple.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/plex/dmlabelephplex.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/plex/ftn-auto/dmlabelephplexf.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/ftn-auto/dmlabelephf.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/dmlabeleph.o CC arch-linux-c-opt/obj/dm/interface/dmceed.o CC arch-linux-c-opt/obj/dm/interface/dlregisdmdm.o CC arch-linux-c-opt/obj/dm/interface/dmgenerate.o CC arch-linux-c-opt/obj/dm/dt/dualspace/interface/dualspace.o CC arch-linux-c-opt/obj/dm/interface/dmget.o CC arch-linux-c-opt/obj/dm/interface/dmglvis.o CC arch-linux-c-opt/obj/dm/interface/dmcoordinates.o CC arch-linux-c-opt/obj/dm/dt/interface/dt.o CC arch-linux-c-opt/obj/dm/interface/ftn-custom/zdmgetf.o CC arch-linux-c-opt/obj/dm/interface/dmregall.o CC arch-linux-c-opt/obj/dm/interface/dmperiodicity.o CC arch-linux-c-opt/obj/dm/interface/ftn-custom/zdmf.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmcoordinatesf.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmgetf.o CC arch-linux-c-opt/obj/dm/interface/dmi.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmperiodicityf.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmf.o CC arch-linux-c-opt/obj/dm/field/interface/dlregisdmfield.o CC arch-linux-c-opt/obj/dm/field/interface/dmfieldregi.o CC arch-linux-c-opt/obj/dm/field/interface/ftn-auto/dmfieldf.o CC arch-linux-c-opt/obj/dm/field/interface/dmfield.o CC arch-linux-c-opt/obj/dm/field/impls/shell/dmfieldshell.o CC arch-linux-c-opt/obj/dm/impls/swarm/data_ex.o CC arch-linux-c-opt/obj/dm/impls/swarm/data_bucket.o CC arch-linux-c-opt/obj/dm/field/impls/da/dmfieldda.o CC arch-linux-c-opt/obj/dm/label/dmlabel.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarm_migrate.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_da.o CC 
arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_sort.o CC arch-linux-c-opt/obj/dm/impls/swarm/f90-custom/zswarmf90.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-custom/zswarm.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_plex.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_view.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarm_migratef.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarmpicf.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarmf.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarm.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic.o CC arch-linux-c-opt/obj/dm/impls/forest/ftn-auto/forestf.o CC arch-linux-c-opt/obj/dm/impls/shell/ftn-auto/dmshellf.o CC arch-linux-c-opt/obj/dm/impls/shell/ftn-custom/zdmshellf.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/lagrange/dspacelagrange.o CC arch-linux-c-opt/obj/dm/impls/shell/dmshell.o CC arch-linux-c-opt/obj/dm/field/impls/ds/dmfieldds.o CC arch-linux-c-opt/obj/dm/impls/forest/forest.o CC arch-linux-c-opt/obj/dm/impls/stag/stagintern.o CC arch-linux-c-opt/obj/dm/impls/stag/stag1d.o CC arch-linux-c-opt/obj/dm/impls/stag/stagda.o CC arch-linux-c-opt/obj/dm/impls/stag/stag.o CC arch-linux-c-opt/obj/dm/interface/dm.o CC arch-linux-c-opt/obj/dm/impls/stag/stagstencil.o CC arch-linux-c-opt/obj/dm/impls/stag/stagmulti.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcgns.o CC arch-linux-c-opt/obj/dm/impls/plex/plexadapt.o CC arch-linux-c-opt/obj/dm/impls/plex/plexceed.o CC arch-linux-c-opt/obj/dm/impls/stag/stagutils.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcoarsen.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcheckinterface.o CC arch-linux-c-opt/obj/dm/impls/plex/plexegads.o CC arch-linux-c-opt/obj/dm/impls/plex/plexegadslite.o CC arch-linux-c-opt/obj/dm/impls/plex/plexextrude.o CC arch-linux-c-opt/obj/dm/impls/stag/stag2d.o CC arch-linux-c-opt/obj/dm/impls/plex/plexgenerate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexfvm.o CC arch-linux-c-opt/obj/dm/impls/plex/plexfluent.o CC arch-linux-c-opt/obj/dm/impls/plex/plexexodusii.o CC arch-linux-c-opt/obj/dm/impls/plex/plexdistribute.o CC arch-linux-c-opt/obj/dm/impls/plex/plexglvis.o CC arch-linux-c-opt/obj/dm/impls/plex/plexhdf5xdmf.o CC arch-linux-c-opt/obj/dm/impls/plex/plexhpddm.o CC arch-linux-c-opt/obj/dm/impls/plex/plexindices.o CC arch-linux-c-opt/obj/dm/impls/plex/plexmed.o CC arch-linux-c-opt/obj/dm/impls/plex/plexmetric.o CC arch-linux-c-opt/obj/dm/impls/stag/stag3d.o CC arch-linux-c-opt/obj/dm/impls/plex/plexhdf5.o CC arch-linux-c-opt/obj/dm/impls/plex/plexgeometry.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcreate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexnatural.o CC arch-linux-c-opt/obj/dm/impls/plex/plexinterpolate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexpoint.o CC arch-linux-c-opt/obj/dm/impls/plex/plexply.o CC arch-linux-c-opt/obj/dm/impls/plex/plexrefine.o CC arch-linux-c-opt/obj/dm/impls/plex/plexorient.o CC arch-linux-c-opt/obj/dm/impls/plex/plexgmsh.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfpack.o CC arch-linux-c-opt/obj/dm/impls/plex/plexreorder.o CC arch-linux-c-opt/obj/dm/impls/plex/plexproject.o CC arch-linux-c-opt/obj/dm/impls/plex/plexpreallocate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexsection.o CC arch-linux-c-opt/obj/dm/impls/plex/plexpartition.o CC arch-linux-c-opt/obj/dm/impls/plex/pointqueue.o CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexf90.o CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexfemf90.o CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexgeometryf90.o CC arch-linux-c-opt/obj/dm/impls/plex/plexvtk.o CC 
arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexsectionf90.o CC arch-linux-c-opt/obj/dm/impls/plex/plexsfc.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/interface/ftn-auto/plextransformf.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/extrude/ftn-auto/plextrextrudef.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/1d/plexref1d.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/regular/plexrefregular.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/regular/ftn-auto/plexrefregularf.o CC arch-linux-c-opt/obj/dm/impls/plex/plexfem.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/bl/plexrefbl.o CC arch-linux-c-opt/obj/dm/impls/plex/plexvtu.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/extrude/plextrextrude.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/alfeld/plexrefalfeld.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/tobox/plexreftobox.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcgnsf.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/filter/plextrfilter.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcheckinterfacef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcreatef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexegadsf.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/sbr/plexrefsbr.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexexodusiif.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexdistributef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexfemf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexfvmf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexgeometryf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexgmshf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexindicesf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexinterpolatef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexnaturalf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexorientf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexpartitionf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexmetricf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexpointf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexprojectf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexrefinef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexreorderf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexsfcf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plextreef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexsubmeshf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexcreate.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexdistribute.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexexodusii.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexextrude.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/interface/plextransform.o CC arch-linux-c-opt/obj/dm/impls/plex/plex.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexfluent.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexgmsh.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexsubmesh.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkcreatef.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkmonitorf.o CC arch-linux-c-opt/obj/dm/impls/network/networkmonitor.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkf.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkviewf.o CC arch-linux-c-opt/obj/dm/impls/patch/ftn-auto/patchcreatef.o CC 
arch-linux-c-opt/obj/dm/impls/network/networkview.o CC arch-linux-c-opt/obj/dm/impls/patch/patchcreate.o CC arch-linux-c-opt/obj/dm/impls/network/networkcreate.o CC arch-linux-c-opt/obj/dm/impls/composite/f90-custom/zfddaf90.o CC arch-linux-c-opt/obj/dm/impls/composite/ftn-auto/packf.o CC arch-linux-c-opt/obj/dm/impls/composite/ftn-custom/zfddaf.o CC arch-linux-c-opt/obj/dm/impls/patch/patch.o CC arch-linux-c-opt/obj/dm/impls/composite/packm.o CC arch-linux-c-opt/obj/dm/impls/product/product.o CC arch-linux-c-opt/obj/dm/impls/redundant/ftn-auto/dmredundantf.o CC arch-linux-c-opt/obj/dm/impls/product/productutils.o CC arch-linux-c-opt/obj/dm/impls/sliced/sliced.o CC arch-linux-c-opt/obj/dm/impls/redundant/dmredundant.o CC arch-linux-c-opt/obj/dm/impls/plex/plexsubmesh.o CC arch-linux-c-opt/obj/dm/impls/da/da1.o CC arch-linux-c-opt/obj/dm/impls/da/dacorn.o CC arch-linux-c-opt/obj/dm/impls/composite/pack.o CC arch-linux-c-opt/obj/dm/impls/da/da.o CC arch-linux-c-opt/obj/dm/impls/da/dadestroy.o CC arch-linux-c-opt/obj/dm/impls/da/dadist.o CC arch-linux-c-opt/obj/dm/impls/da/dacreate.o CC arch-linux-c-opt/obj/dm/impls/da/dadd.o CC arch-linux-c-opt/obj/dm/impls/plex/plextree.o CC arch-linux-c-opt/obj/dm/impls/da/da2.o CC arch-linux-c-opt/obj/dm/impls/da/dageometry.o CC arch-linux-c-opt/obj/dm/impls/da/daghost.o CC arch-linux-c-opt/obj/dm/impls/da/dagtona.o CC arch-linux-c-opt/obj/dm/impls/da/dagtol.o CC arch-linux-c-opt/obj/dm/impls/da/daindex.o CC arch-linux-c-opt/obj/dm/impls/da/dagetarray.o CC arch-linux-c-opt/obj/dm/impls/da/dagetelem.o CC arch-linux-c-opt/obj/dm/impls/da/daltol.o CC arch-linux-c-opt/obj/dm/impls/da/dapf.o CC arch-linux-c-opt/obj/dm/impls/da/dapreallocate.o CC arch-linux-c-opt/obj/dm/impls/da/dareg.o CC arch-linux-c-opt/obj/dm/impls/da/dascatter.o CC arch-linux-c-opt/obj/dm/impls/da/dalocal.o CC arch-linux-c-opt/obj/dm/impls/da/daview.o CC arch-linux-c-opt/obj/dm/impls/da/dasub.o CC arch-linux-c-opt/obj/dm/impls/da/f90-custom/zda1f90.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zda1f.o CC arch-linux-c-opt/obj/dm/impls/da/gr1.o CC arch-linux-c-opt/obj/dm/impls/network/network.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zda2f.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zda3f.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdacornf.o CC arch-linux-c-opt/obj/dm/impls/da/grglvis.o CC arch-linux-c-opt/obj/dm/impls/da/da3.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdagetscatterf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaindexf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdasubf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaghostf.o CC arch-linux-c-opt/obj/dm/impls/da/gr2.o CC arch-linux-c-opt/obj/dm/impls/da/dainterp.o CC arch-linux-c-opt/obj/dm/impls/da/grvtk.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dacornf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaviewf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dacreatef.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/daddf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dageometryf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dadistf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagetarrayf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/daf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagetelemf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagtolf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/daindexf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagtonaf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dalocalf.o CC 
arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dainterpf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dapreallocatef.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dasubf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/fddaf.o CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/gr1f.o CC arch-linux-c-opt/obj/dm/partitioner/interface/partitionerreg.o CC arch-linux-c-opt/obj/dm/partitioner/interface/ftn-custom/zpartitioner.o CC arch-linux-c-opt/obj/dm/partitioner/interface/ftn-auto/partitionerf.o CC arch-linux-c-opt/obj/dm/partitioner/impls/chaco/partchaco.o CC arch-linux-c-opt/obj/dm/partitioner/impls/gather/partgather.o CC arch-linux-c-opt/obj/dm/partitioner/impls/shell/ftn-auto/partshellf.o CC arch-linux-c-opt/obj/dm/partitioner/interface/partitioner.o CC arch-linux-c-opt/obj/dm/partitioner/impls/shell/partshell.o CC arch-linux-c-opt/obj/dm/partitioner/impls/ptscotch/partptscotch.o CC arch-linux-c-opt/obj/dm/partitioner/impls/parmetis/partparmetis.o CC arch-linux-c-opt/obj/dm/partitioner/impls/matpart/partmatpart.o CC arch-linux-c-opt/obj/ksp/pc/interface/pcregis.o CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-custom/zpcsetf.o CC arch-linux-c-opt/obj/ksp/pc/interface/pcset.o CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-auto/pcsetf.o CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-custom/zpreconf.o CC arch-linux-c-opt/obj/ksp/pc/impls/mat/ftn-auto/pcmatf.o CC arch-linux-c-opt/obj/dm/partitioner/impls/simple/partsimple.o CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-auto/preconf.o CC arch-linux-c-opt/obj/ksp/pc/impls/mat/pcmat.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/fmg.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-custom/zmgf.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-custom/zmgfuncf.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/smg.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/mgadapt.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/mgfunc.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-auto/mgf.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-auto/mgfuncf.o CC arch-linux-c-opt/obj/ksp/pc/impls/wb/ftn-auto/wbf.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/gdsw.o CC arch-linux-c-opt/obj/ksp/pc/interface/precon.o CC arch-linux-c-opt/obj/ksp/pc/impls/bjacobi/ftn-auto/bjacobif.o CC arch-linux-c-opt/obj/ksp/pc/impls/bjacobi/ftn-custom/zbjacobif.o CC arch-linux-c-opt/obj/ksp/pc/impls/ksp/ftn-auto/pckspf.o CC arch-linux-c-opt/obj/ksp/pc/impls/none/none.o CC arch-linux-c-opt/obj/ksp/pc/impls/ksp/pcksp.o CC arch-linux-c-opt/obj/ksp/pc/impls/gasm/ftn-auto/gasmf.o CC arch-linux-c-opt/obj/ksp/pc/impls/gasm/ftn-custom/zgasmf.o CC arch-linux-c-opt/obj/ksp/pc/impls/python/pythonpc.o CC arch-linux-c-opt/obj/ksp/pc/impls/python/ftn-custom/zpythonpcf.o CC arch-linux-c-opt/obj/ksp/pc/impls/sor/ftn-auto/sorf.o CC arch-linux-c-opt/obj/ksp/pc/impls/hmg/ftn-auto/hmgf.o CC arch-linux-c-opt/obj/ksp/pc/impls/kaczmarz/kaczmarz.o CC arch-linux-c-opt/obj/ksp/pc/impls/sor/sor.o CC arch-linux-c-opt/obj/ksp/pc/impls/is/ftn-auto/pcisf.o CC arch-linux-c-opt/obj/ksp/pc/impls/hmg/hmg.o CC arch-linux-c-opt/obj/dm/impls/da/fdda.o CC arch-linux-c-opt/obj/ksp/pc/impls/mg/mg.o CC arch-linux-c-opt/obj/ksp/pc/impls/bjacobi/bjacobi.o CC arch-linux-c-opt/obj/ksp/pc/impls/is/pcis.o CC arch-linux-c-opt/obj/ksp/pc/impls/wb/wb.o CC arch-linux-c-opt/obj/ksp/pc/impls/is/nn/nn.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/ftn-auto/aggf.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/ftn-custom/zgamgf.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/ftn-auto/gamgf.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/util.o CC arch-linux-c-opt/obj/ksp/pc/impls/shell/ftn-auto/shellpcf.o CC 
arch-linux-c-opt/obj/ksp/pc/impls/redistribute/ftn-auto/redistributef.o CC arch-linux-c-opt/obj/ksp/pc/impls/shell/ftn-custom/zshellpcf.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/geo.o CC arch-linux-c-opt/obj/ksp/pc/impls/gasm/gasm.o CC arch-linux-c-opt/obj/ksp/pc/impls/shell/shellpc.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/agg.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/classical.o CC arch-linux-c-opt/obj/ksp/pc/impls/deflation/ftn-auto/deflationf.o CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/bitmask.o CC arch-linux-c-opt/obj/ksp/pc/impls/redistribute/redistribute.o CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/tfs.o CC arch-linux-c-opt/obj/ksp/pc/impls/deflation/deflation.o CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/comm.o CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/gamg.o CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/ivec.o CC arch-linux-c-opt/obj/ksp/pc/impls/deflation/deflationspace.o CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/xxt.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/factimpl.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/lu/lu.o CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/gs.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/cholesky/ftn-auto/choleskyf.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/qr/qr.o CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/xyt.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/factor.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/ftn-custom/zluf.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/ftn-auto/factorf.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/cholesky/cholesky.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/icc/icc.o CC arch-linux-c-opt/obj/ksp/pc/impls/factor/ilu/ilu.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/ftn-custom/zbddcf.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/ftn-auto/bddcf.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcnullspace.o CC arch-linux-c-opt/obj/ksp/pc/impls/fieldsplit/ftn-auto/fieldsplitf.o CC arch-linux-c-opt/obj/ksp/pc/impls/fieldsplit/ftn-custom/zfieldsplitf.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcscalingbasic.o CC arch-linux-c-opt/obj/ksp/pc/impls/composite/ftn-custom/zcompositef.o CC arch-linux-c-opt/obj/ksp/pc/impls/composite/ftn-auto/compositef.o CC arch-linux-c-opt/obj/ksp/pc/impls/composite/composite.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcfetidp.o CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/telescope_coarsedm.o CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/ftn-auto/telescopef.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcgraph.o CC arch-linux-c-opt/obj/ksp/pc/impls/redundant/ftn-auto/redundantf.o CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/telescope.o CC arch-linux-c-opt/obj/ksp/pc/impls/redundant/redundant.o CC arch-linux-c-opt/obj/ksp/pc/impls/lsc/lsc.o CC arch-linux-c-opt/obj/ksp/pc/impls/svd/svd.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddc.o CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/telescope_dmda.o CC arch-linux-c-opt/obj/ksp/pc/impls/lmvm/lmvmpc.o CC arch-linux-c-opt/obj/ksp/pc/impls/lmvm/ftn-auto/lmvmpcf.o CC arch-linux-c-opt/obj/ksp/pc/impls/asm/ftn-auto/asmf.o CC arch-linux-c-opt/obj/ksp/pc/impls/jacobi/ftn-auto/jacobif.o CC arch-linux-c-opt/obj/ksp/pc/impls/asm/ftn-custom/zasmf.o CC arch-linux-c-opt/obj/ksp/pc/impls/mpi/pcmpi.o CC arch-linux-c-opt/obj/ksp/pc/impls/jacobi/jacobi.o CC arch-linux-c-opt/obj/ksp/pc/impls/galerkin/ftn-auto/galerkinf.o CC arch-linux-c-opt/obj/ksp/pc/impls/cp/cp.o CC arch-linux-c-opt/obj/ksp/pc/impls/galerkin/galerkin.o CC arch-linux-c-opt/obj/ksp/pc/impls/eisens/ftn-auto/eisenf.o CC arch-linux-c-opt/obj/ksp/pc/impls/eisens/eisen.o CC 
arch-linux-c-opt/obj/ksp/pc/impls/fieldsplit/fieldsplit.o CC arch-linux-c-opt/obj/ksp/pc/impls/vpbjacobi/vpbjacobi.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcschurs.o CC arch-linux-c-opt/obj/ksp/ksp/interface/dlregisksp.o CC arch-linux-c-opt/obj/ksp/ksp/interface/dmksp.o CC arch-linux-c-opt/obj/ksp/pc/impls/pbjacobi/pbjacobi.o CC arch-linux-c-opt/obj/ksp/ksp/interface/iguess.o CC arch-linux-c-opt/obj/ksp/ksp/interface/eige.o CC arch-linux-c-opt/obj/ksp/ksp/interface/itcreate.o CC arch-linux-c-opt/obj/ksp/pc/impls/asm/asm.o CC arch-linux-c-opt/obj/ksp/ksp/interface/itregis.o CC arch-linux-c-opt/obj/ksp/ksp/interface/itres.o CC arch-linux-c-opt/obj/ksp/ksp/interface/itcl.o CC arch-linux-c-opt/obj/ksp/ksp/interface/xmon.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zdmkspf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/ziguess.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zitclf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zitcreatef.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zxonf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zitfuncf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/f90-custom/zitfuncf90.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/eigef.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itclf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/iguessf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itcreatef.o CC arch-linux-c-opt/obj/ksp/ksp/interface/iterativ.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/iterativf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itresf.o CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itfuncf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/kspmatregi.o CC arch-linux-c-opt/obj/ksp/ksp/utils/schurm/ftn-auto/schurmf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/ftn-auto/symbadbrdnf.o CC arch-linux-c-opt/obj/ksp/pc/impls/patch/pcpatch.o CC arch-linux-c-opt/obj/ksp/ksp/interface/itfunc.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/lmvmimpl.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/lmvmutils.o CC arch-linux-c-opt/obj/ksp/ksp/utils/dmproject.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/symbadbrdn.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/ftn-auto/symbrdnf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/dfp/ftn-auto/dfpf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/schurm/schurm.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/diagbrdn/ftn-auto/diagbrdnf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/ftn-auto/badbrdnf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/dfp/dfp.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/ftn-auto/brdnf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/ftn-auto/lmvmutilsf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/symbrdn.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/brdn.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/badbrdn.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/bfgs/ftn-auto/bfgsf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/sr1/ftn-auto/sr1f.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/diagbrdn/diagbrdn.o CC arch-linux-c-opt/obj/ksp/ksp/guess/impls/fischer/ftn-auto/fischerf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/ftn-auto/dmprojectf.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/bfgs/bfgs.o CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/sr1/sr1.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/borthog.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmpre.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cgs/cgs.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/borthog2.o CC arch-linux-c-opt/obj/ksp/ksp/impls/lcd/lcd.o CC 
arch-linux-c-opt/obj/ksp/ksp/guess/impls/fischer/fischer.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmres2.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmreig.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/ftn-auto/gmpref.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/ftn-custom/zgmres2f.o CC arch-linux-c-opt/obj/ksp/ksp/guess/impls/pod/pod.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/ftn-auto/gmresf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/ftn-auto/modpcff.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/ftn-custom/zmodpcff.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/modpcf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/lgmres/lgmres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/pgmres/pgmres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/pipefgmres/ftn-auto/pipefgmresf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/fgmres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmresleja.o CC arch-linux-c-opt/obj/ksp/ksp/impls/tsirm/tsirm.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmresdeflation.o CC arch-linux-c-opt/obj/ksp/ksp/impls/lsqr/ftn-auto/lsqrf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/pipefgmres/pipefgmres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/python/pythonksp.o CC arch-linux-c-opt/obj/ksp/ksp/impls/python/ftn-custom/zpythonkspf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/lsqr/lsqr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgsl/ftn-auto/bcgslf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmresorthog.o CC arch-linux-c-opt/obj/ksp/ksp/impls/bicg/bicg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/dgmres/dgmres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/minres/ftn-auto/minresf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgtype.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/gltr/ftn-auto/gltrf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgeig.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgls.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgsl/bcgsl.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipecg/pipecg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/ftn-auto/cgtypef.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/stcg/stcg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/minres/minres.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipecgrr/pipecgrr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgne/cgne.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/groppcg/groppcg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/gltr/gltr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/ftn-auto/fcgf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/pipefcg/ftn-auto/pipefcgf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipeprcg/pipeprcg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/nash/nash.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipecg2/pipecg2.o CC arch-linux-c-opt/obj/ksp/ksp/impls/rich/ftn-auto/richscalef.o CC arch-linux-c-opt/obj/ksp/ksp/impls/rich/richscale.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipelcg/pipelcg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/qcg/ftn-auto/qcgf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/fcg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/pipefcg/pipefcg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cheby/betas.o CC arch-linux-c-opt/obj/ksp/ksp/impls/tfqmr/tfqmr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/rich/rich.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cheby/ftn-auto/chebyf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/qcg/qcg.o CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/bcgs.o CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/qmrcgs/qmrcgs.o CC 
arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/fbcgs/fbcgs.o CC arch-linux-c-opt/obj/ksp/ksp/impls/fetidp/ftn-auto/fetidpf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/symmlq/symmlq.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/pipegcr/ftn-auto/pipegcrf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/ftn-auto/gcrf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/pipebcgs/pipebcgs.o CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/fbcgsr/fbcgsr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/gcr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/preonly/preonly.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cr/pipecr/pipecr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cheby/cheby.o CC arch-linux-c-opt/obj/ksp/ksp/impls/cr/cr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/tcqmr/tcqmr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/pipegcr/pipegcr.o CC arch-linux-c-opt/obj/ksp/ksp/impls/ibcgs/ibcgs.o CC arch-linux-c-opt/obj/snes/utils/dmlocalsnes.o CC arch-linux-c-opt/obj/snes/utils/ftn-custom/zdmdasnesf.o CC arch-linux-c-opt/obj/snes/utils/convest.o CC arch-linux-c-opt/obj/snes/utils/ftn-custom/zdmlocalsnesf.o CC arch-linux-c-opt/obj/snes/utils/dmsnes.o CC arch-linux-c-opt/obj/snes/utils/dmdasnes.o CC arch-linux-c-opt/obj/snes/utils/ftn-custom/zdmsnesf.o CC arch-linux-c-opt/obj/ksp/ksp/impls/fetidp/fetidp.o CC arch-linux-c-opt/obj/snes/utils/ftn-auto/convestf.o CC arch-linux-c-opt/obj/snes/utils/ftn-auto/dmadaptf.o CC arch-linux-c-opt/obj/snes/utils/ftn-auto/dmplexsnesf.o CC arch-linux-c-opt/obj/snes/linesearch/interface/linesearchregi.o CC arch-linux-c-opt/obj/snes/linesearch/interface/ftn-custom/zlinesearchf.o CC arch-linux-c-opt/obj/snes/linesearch/interface/ftn-auto/linesearchf.o CC arch-linux-c-opt/obj/snes/linesearch/impls/bt/ftn-auto/linesearchbtf.o CC arch-linux-c-opt/obj/snes/linesearch/impls/shell/ftn-custom/zlinesearchshellf.o CC arch-linux-c-opt/obj/snes/linesearch/impls/shell/linesearchshell.o CC arch-linux-c-opt/obj/snes/utils/dmadapt.o CC arch-linux-c-opt/obj/snes/linesearch/impls/basic/linesearchbasic.o CC arch-linux-c-opt/obj/snes/linesearch/interface/linesearch.o CC arch-linux-c-opt/obj/snes/linesearch/impls/cp/linesearchcp.o CC arch-linux-c-opt/obj/snes/linesearch/impls/bt/linesearchbt.o CC arch-linux-c-opt/obj/snes/interface/dlregissnes.o CC arch-linux-c-opt/obj/snes/linesearch/impls/nleqerr/linesearchnleqerr.o CC arch-linux-c-opt/obj/snes/linesearch/impls/l2/linesearchl2.o CC arch-linux-c-opt/obj/snes/interface/snesj2.o CC arch-linux-c-opt/obj/snes/interface/snesj.o CC arch-linux-c-opt/obj/snes/interface/snesregi.o CC arch-linux-c-opt/obj/snes/interface/snespc.o CC arch-linux-c-opt/obj/snes/interface/snesob.o CC arch-linux-c-opt/obj/snes/interface/noise/snesdnest.o CC arch-linux-c-opt/obj/snes/interface/f90-custom/zsnesf90.o CC arch-linux-c-opt/obj/snes/interface/ftn-auto/snespcf.o CC arch-linux-c-opt/obj/snes/interface/ftn-auto/snesf.o CC arch-linux-c-opt/obj/snes/interface/noise/snesmfj2.o CC arch-linux-c-opt/obj/snes/interface/noise/snesnoise.o CC arch-linux-c-opt/obj/snes/interface/snesut.o CC arch-linux-c-opt/obj/snes/impls/qn/ftn-auto/qnf.o CC arch-linux-c-opt/obj/snes/utils/dmplexsnes.o CC arch-linux-c-opt/obj/snes/interface/ftn-custom/zsnesf.o CC arch-linux-c-opt/obj/snes/impls/fas/ftn-auto/fasf.o CC arch-linux-c-opt/obj/snes/impls/fas/fasgalerkin.o CC arch-linux-c-opt/obj/snes/impls/fas/ftn-auto/fasgalerkinf.o CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcprivate.o CC arch-linux-c-opt/obj/snes/impls/fas/ftn-auto/fasfuncf.o CC arch-linux-c-opt/obj/snes/impls/qn/qn.o CC arch-linux-c-opt/obj/snes/impls/ntrdc/ftn-auto/ntrdcf.o CC 
arch-linux-c-opt/obj/snes/impls/shell/snesshell.o CC arch-linux-c-opt/obj/snes/impls/shell/ftn-custom/zsnesshellf.o CC arch-linux-c-opt/obj/snes/impls/shell/ftn-auto/snesshellf.o CC arch-linux-c-opt/obj/snes/impls/fas/fasfunc.o CC arch-linux-c-opt/obj/snes/impls/richardson/snesrichardson.o CC arch-linux-c-opt/obj/snes/impls/composite/ftn-auto/snescompositef.o CC arch-linux-c-opt/obj/snes/impls/gs/ftn-auto/snesgsf.o CC arch-linux-c-opt/obj/snes/impls/ntrdc/ntrdc.o CC arch-linux-c-opt/obj/snes/impls/gs/gssecant.o CC arch-linux-c-opt/obj/snes/impls/gs/snesgs.o CC arch-linux-c-opt/obj/snes/impls/tr/ftn-auto/trf.o CC arch-linux-c-opt/obj/snes/impls/fas/fas.o CC arch-linux-c-opt/obj/snes/impls/vi/ss/ftn-auto/vissf.o CC arch-linux-c-opt/obj/snes/impls/vi/ftn-auto/vif.o CC arch-linux-c-opt/obj/snes/impls/patch/snespatch.o CC arch-linux-c-opt/obj/snes/impls/vi/rs/ftn-auto/virsf.o CC arch-linux-c-opt/obj/snes/impls/multiblock/ftn-auto/multiblockf.o CC arch-linux-c-opt/obj/snes/impls/ksponly/ksponly.o CC arch-linux-c-opt/obj/snes/impls/vi/ss/viss.o CC arch-linux-c-opt/obj/snes/impls/vi/vi.o CC arch-linux-c-opt/obj/snes/impls/tr/tr.o CC arch-linux-c-opt/obj/snes/impls/composite/snescomposite.o CC arch-linux-c-opt/obj/snes/impls/nasm/aspin.o CC arch-linux-c-opt/obj/snes/impls/vi/rs/virs.o CC arch-linux-c-opt/obj/snes/impls/nasm/ftn-auto/nasmf.o CC arch-linux-c-opt/obj/snes/impls/ngmres/ftn-auto/snesngmresf.o CC arch-linux-c-opt/obj/snes/impls/multiblock/multiblock.o CC arch-linux-c-opt/obj/snes/impls/ngmres/anderson.o CC arch-linux-c-opt/obj/snes/impls/python/ftn-custom/zpythonsf.o CC arch-linux-c-opt/obj/snes/impls/python/pythonsnes.o CC arch-linux-c-opt/obj/snes/impls/ngmres/ngmresfunc.o CC arch-linux-c-opt/obj/snes/interface/snes.o CC arch-linux-c-opt/obj/snes/impls/ncg/ftn-auto/snesncgf.o CC arch-linux-c-opt/obj/snes/impls/ngmres/snesngmres.o CC arch-linux-c-opt/obj/snes/impls/ls/ls.o CC arch-linux-c-opt/obj/snes/mf/ftn-auto/snesmfjf.o CC arch-linux-c-opt/obj/snes/mf/snesmfj.o CC arch-linux-c-opt/obj/snes/impls/ncg/snesncg.o CC arch-linux-c-opt/obj/snes/impls/nasm/nasm.o CC arch-linux-c-opt/obj/snes/impls/ms/ms.o CC arch-linux-c-opt/obj/ts/utils/dmnetworkts.o CC arch-linux-c-opt/obj/ts/utils/dmplexlandau/ftn-custom/zlandaucreate.o CC arch-linux-c-opt/obj/ts/utils/dmdats.o CC arch-linux-c-opt/obj/ts/utils/dmlocalts.o CC arch-linux-c-opt/obj/ts/utils/dmplexlandau/ftn-auto/plexlandf.o CC arch-linux-c-opt/obj/ts/event/ftn-auto/tseventf.o CC arch-linux-c-opt/obj/ts/utils/ftn-auto/dmplextsf.o CC arch-linux-c-opt/obj/ts/utils/dmplexts.o CC arch-linux-c-opt/obj/ts/utils/tsconvest.o CC arch-linux-c-opt/obj/ts/utils/dmts.o CC arch-linux-c-opt/obj/ts/trajectory/interface/ftn-custom/ztrajf.o CC arch-linux-c-opt/obj/ts/trajectory/interface/ftn-auto/trajf.o CC arch-linux-c-opt/obj/ts/trajectory/utils/reconstruct.o CC arch-linux-c-opt/obj/ts/trajectory/impls/singlefile/singlefile.o CC arch-linux-c-opt/obj/ts/trajectory/impls/visualization/trajvisualization.o CC arch-linux-c-opt/obj/ts/trajectory/impls/basic/trajbasic.o CC arch-linux-c-opt/obj/ts/adapt/interface/ftn-custom/ztsadaptf.o CC arch-linux-c-opt/obj/ts/event/tsevent.o CC arch-linux-c-opt/obj/ts/adapt/interface/ftn-auto/tsadaptf.o CC arch-linux-c-opt/obj/ts/trajectory/interface/traj.o CC arch-linux-c-opt/obj/ts/adapt/impls/history/adapthist.o CC arch-linux-c-opt/obj/ts/adapt/impls/history/ftn-auto/adapthistf.o CC arch-linux-c-opt/obj/ts/adapt/impls/none/adaptnone.o CC arch-linux-c-opt/obj/ts/adapt/impls/glee/adaptglee.o CC 
arch-linux-c-opt/obj/ts/adapt/impls/basic/adaptbasic.o CC arch-linux-c-opt/obj/ts/adapt/impls/cfl/adaptcfl.o CC arch-linux-c-opt/obj/ts/adapt/impls/dsp/ftn-custom/zadaptdspf.o CC arch-linux-c-opt/obj/ts/adapt/interface/tsadapt.o CC arch-linux-c-opt/obj/ts/adapt/impls/dsp/ftn-auto/adaptdspf.o CC arch-linux-c-opt/obj/ts/interface/tscreate.o CC arch-linux-c-opt/obj/ts/adapt/impls/dsp/adaptdsp.o CC arch-linux-c-opt/obj/ts/interface/dlregists.o CC arch-linux-c-opt/obj/ts/trajectory/impls/memory/trajmemory.o CC arch-linux-c-opt/obj/ts/interface/tsreg.o CC arch-linux-c-opt/obj/ts/interface/tseig.o CC arch-linux-c-opt/obj/ts/interface/tshistory.o CC arch-linux-c-opt/obj/ts/interface/tsregall.o CC arch-linux-c-opt/obj/ts/interface/ftn-custom/ztscreatef.o CC arch-linux-c-opt/obj/ts/interface/tsrhssplit.o CC arch-linux-c-opt/obj/ts/interface/sensitivity/ftn-auto/tssenf.o CC arch-linux-c-opt/obj/ts/interface/ftn-custom/ztsregf.o CC arch-linux-c-opt/obj/ts/impls/explicit/rk/ftn-custom/zrkf.o CC arch-linux-c-opt/obj/ts/interface/ftn-custom/ztsf.o CC arch-linux-c-opt/obj/ts/interface/ftn-auto/tsf.o CC arch-linux-c-opt/obj/ts/impls/explicit/rk/ftn-auto/rkf.o CC arch-linux-c-opt/obj/ts/impls/explicit/ssp/ftn-custom/zsspf.o CC arch-linux-c-opt/obj/ts/impls/explicit/ssp/ftn-auto/sspf.o CC arch-linux-c-opt/obj/ts/impls/explicit/euler/euler.o CC arch-linux-c-opt/obj/ts/interface/sensitivity/tssen.o CC arch-linux-c-opt/obj/ts/interface/tsmon.o CC arch-linux-c-opt/obj/ts/impls/rosw/ftn-custom/zroswf.o CC arch-linux-c-opt/obj/ts/impls/explicit/rk/mrk.o CC arch-linux-c-opt/obj/ts/impls/explicit/ssp/ssp.o CC arch-linux-c-opt/obj/ts/impls/arkimex/ftn-auto/arkimexf.o CC arch-linux-c-opt/obj/ts/impls/arkimex/ftn-custom/zarkimexf.o CC arch-linux-c-opt/obj/ts/impls/pseudo/ftn-auto/posindepf.o CC arch-linux-c-opt/obj/ts/impls/pseudo/posindep.o CC arch-linux-c-opt/obj/ts/impls/python/pythonts.o CC arch-linux-c-opt/obj/ts/impls/symplectic/basicsymplectic/basicsymplectic.o CC arch-linux-c-opt/obj/ts/impls/explicit/rk/rk.o CC arch-linux-c-opt/obj/ts/impls/python/ftn-custom/zpythontf.o CC arch-linux-c-opt/obj/ts/impls/eimex/eimex.o CC arch-linux-c-opt/obj/ts/impls/implicit/theta/ftn-auto/thetaf.o CC arch-linux-c-opt/obj/ts/impls/mimex/mimex.o CC arch-linux-c-opt/obj/ts/impls/rosw/rosw.o CC arch-linux-c-opt/obj/ts/impls/glee/glee.o CC arch-linux-c-opt/obj/ts/interface/ts.o CC arch-linux-c-opt/obj/ts/impls/implicit/glle/glleadapt.o CC arch-linux-c-opt/obj/ts/impls/arkimex/arkimex.o CC arch-linux-c-opt/obj/ts/impls/implicit/irk/irk.o CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/ftn-auto/alpha1f.o CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/alpha1.o CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/ftn-auto/alpha2f.o CC arch-linux-c-opt/obj/ts/impls/implicit/discgrad/ftn-auto/tsdiscgradf.o CC arch-linux-c-opt/obj/ts/impls/bdf/ftn-auto/bdff.o CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/alpha2.o CC arch-linux-c-opt/obj/ts/characteristic/interface/mocregis.o CC arch-linux-c-opt/obj/ts/characteristic/interface/ftn-auto/characteristicf.o CC arch-linux-c-opt/obj/ts/impls/implicit/discgrad/tsdiscgrad.o CC arch-linux-c-opt/obj/ts/characteristic/interface/slregis.o CC arch-linux-c-opt/obj/ts/impls/multirate/mprk.o CC arch-linux-c-opt/obj/ts/impls/implicit/theta/theta.o CC arch-linux-c-opt/obj/ts/characteristic/impls/da/slda.o CC arch-linux-c-opt/obj/ts/impls/bdf/bdf.o CC arch-linux-c-opt/obj/tao/bound/impls/blmvm/ftn-auto/blmvmf.o CC arch-linux-c-opt/obj/tao/bound/impls/bqnls/bqnls.o CC 
arch-linux-c-opt/obj/tao/bound/impls/blmvm/blmvm.o CC arch-linux-c-opt/obj/tao/bound/utils/isutil.o CC arch-linux-c-opt/obj/ts/utils/dmplexlandau/plexland.o CC arch-linux-c-opt/obj/tao/bound/impls/tron/tron.o CC arch-linux-c-opt/obj/ts/characteristic/interface/characteristic.o CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bnls.o CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bntl.o CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bntr.o CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnkls.o CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnktl.o CC arch-linux-c-opt/obj/tao/pde_constrained/impls/lcl/lcl.o CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnk.o CC arch-linux-c-opt/obj/tao/bound/impls/bncg/bncg.o CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnktr.o CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/ftn-auto/bqnkf.o CC arch-linux-c-opt/obj/tao/shell/ftn-auto/taoshellf.o CC arch-linux-c-opt/obj/tao/shell/taoshell.o CC arch-linux-c-opt/obj/tao/matrix/submatfree.o CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bnk.o CC arch-linux-c-opt/obj/tao/matrix/adamat.o CC arch-linux-c-opt/obj/tao/quadratic/impls/gpcg/gpcg.o CC arch-linux-c-opt/obj/tao/constrained/impls/almm/ftn-auto/almmutilsf.o CC arch-linux-c-opt/obj/tao/constrained/impls/almm/almmutils.o CC arch-linux-c-opt/obj/tao/quadratic/impls/bqpip/bqpip.o CC arch-linux-c-opt/obj/tao/constrained/impls/admm/ftn-auto/admmf.o CC arch-linux-c-opt/obj/ts/impls/implicit/glle/glle.o CC arch-linux-c-opt/obj/tao/constrained/impls/admm/ftn-custom/zadmmf.o CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssls.o CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssfls.o CC arch-linux-c-opt/obj/tao/linesearch/interface/dlregis_taolinesearch.o CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssils.o CC arch-linux-c-opt/obj/tao/constrained/impls/almm/almm.o CC arch-linux-c-opt/obj/tao/complementarity/impls/asls/asfls.o CC arch-linux-c-opt/obj/tao/complementarity/impls/asls/asils.o CC arch-linux-c-opt/obj/tao/linesearch/interface/ftn-auto/taolinesearchf.o CC arch-linux-c-opt/obj/tao/linesearch/interface/ftn-custom/ztaolinesearchf.o CC arch-linux-c-opt/obj/tao/constrained/impls/admm/admm.o CC arch-linux-c-opt/obj/tao/constrained/impls/ipm/ipm.o CC arch-linux-c-opt/obj/tao/linesearch/impls/gpcglinesearch/gpcglinesearch.o CC arch-linux-c-opt/obj/tao/linesearch/impls/unit/unit.o CC arch-linux-c-opt/obj/tao/linesearch/impls/morethuente/morethuente.o CC arch-linux-c-opt/obj/tao/snes/taosnes.o CC arch-linux-c-opt/obj/tao/linesearch/interface/taolinesearch.o CC arch-linux-c-opt/obj/tao/linesearch/impls/armijo/armijo.o CC arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/ftn-auto/brgnf.o CC arch-linux-c-opt/obj/tao/linesearch/impls/owarmijo/owarmijo.o CC arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/ftn-custom/zbrgnf.o CC arch-linux-c-opt/obj/tao/interface/dlregistao.o CC arch-linux-c-opt/obj/tao/leastsquares/impls/pounders/gqt.o CC arch-linux-c-opt/obj/tao/interface/fdiff.o CC arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/brgn.o CC arch-linux-c-opt/obj/tao/interface/taosolver_bounds.o CC arch-linux-c-opt/obj/tao/interface/taosolverregi.o CC arch-linux-c-opt/obj/tao/constrained/impls/ipm/pdipm.o CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_boundsf.o CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_hjf.o CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_fgf.o CC arch-linux-c-opt/obj/tao/interface/taosolver_fg.o CC arch-linux-c-opt/obj/tao/python/pythontao.o CC arch-linux-c-opt/obj/tao/python/ftn-custom/zpythontaof.o CC 
arch-linux-c-opt/obj/tao/interface/taosolver_hj.o CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolverf.o CC arch-linux-c-opt/obj/tao/interface/ftn-custom/ztaosolverf.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/lmvm/lmvm.o CC arch-linux-c-opt/obj/tao/interface/taosolver.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/owlqn/owlqn.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/neldermead/neldermead.o CC arch-linux-c-opt/obj/tao/util/ftn-auto/tao_utilf.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/cg/taocg.o FC arch-linux-c-opt/obj/sys/classes/bag/f2003-src/fsrc/bagenum.o FC arch-linux-c-opt/obj/sys/objects/f2003-src/fsrc/optionenum.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/ntr/ntr.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/ntl/ntl.o FC arch-linux-c-opt/obj/dm/f90-mod/petscdmswarmmod.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/bmrm/bmrm.o CC arch-linux-c-opt/obj/tao/unconstrained/impls/nls/nls.o CC arch-linux-c-opt/obj/tao/util/tao_util.o FC arch-linux-c-opt/obj/dm/f90-mod/petscdmdamod.o CC arch-linux-c-opt/obj/tao/leastsquares/impls/pounders/pounders.o FC arch-linux-c-opt/obj/dm/f90-mod/petscdmplexmod.o FC arch-linux-c-opt/obj/ksp/f90-mod/petsckspdefmod.o FC arch-linux-c-opt/obj/ksp/f90-mod/petscpcmod.o FC arch-linux-c-opt/obj/ksp/f90-mod/petsckspmod.o FC arch-linux-c-opt/obj/snes/f90-mod/petscsnesmod.o FC arch-linux-c-opt/obj/ts/f90-mod/petsctsmod.o FC arch-linux-c-opt/obj/tao/f90-mod/petsctaomod.o CLINKER arch-linux-c-opt/lib/libpetsc.so.3.019.2
*** Building SLEPc ***
Checking environment... done
Checking PETSc installation... done
Generating Fortran stubs... done
Checking LAPACK library... done
Checking SCALAPACK... done
Writing various configuration files... done
================================================================================
SLEPc Configuration
================================================================================
SLEPc directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc
It is a git repository on branch: remotes/origin/jose/test-petsc-branch~2
SLEPc prefix directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt
PETSc directory: /home/vrkaka/SLlibs/petsc
It is a git repository on branch: main
Architecture "arch-linux-c-opt" with double precision real numbers
SCALAPACK from SCALAPACK linked by PETSc
xxx==========================================================================xxx
Configure stage complete.
Now build the SLEPc library with:
   make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt
xxx==========================================================================xxx
==========================================
Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:20:55 +0300
Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
-----------------------------------------
Using SLEPc directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc
Using PETSc directory: /home/vrkaka/SLlibs/petsc
Using PETSc arch: arch-linux-c-opt
-----------------------------------------
SLEPC_VERSION_RELEASE 0
SLEPC_VERSION_MAJOR 3
SLEPC_VERSION_MINOR 19
SLEPC_VERSION_SUBMINOR 0
SLEPC_VERSION_DATE "unknown"
SLEPC_VERSION_GIT "unknown"
SLEPC_VERSION_DATE_GIT "unknown"
-----------------------------------------
Using SLEPc configure options: --prefix=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt
Using SLEPc configuration flags: #define SLEPC_PETSC_DIR "/home/vrkaka/SLlibs/petsc" #define SLEPC_PETSC_ARCH "arch-linux-c-opt" #define SLEPC_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc" #define SLEPC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" #define SLEPC_VERSION_GIT "v3.19.0-34-ga2e6dffce" #define SLEPC_VERSION_DATE_GIT "2023-05-09 07:30:59 +0000" #define SLEPC_VERSION_BRANCH_GIT "remotes/origin/jose/test-petsc-branch~2" #define SLEPC_HAVE_SCALAPACK 1 #define SLEPC_SCALAPACK_HAVE_UNDERSCORE 1 #define SLEPC_HAVE_PACKAGES ":scalapack:"
-----------------------------------------
PETSC_VERSION_RELEASE 0
PETSC_VERSION_MAJOR 3
PETSC_VERSION_MINOR 19
PETSC_VERSION_SUBMINOR 2
PETSC_VERSION_DATE "unknown"
PETSC_VERSION_GIT "unknown"
PETSC_VERSION_DATE_GIT "unknown"
-----------------------------------------
Using PETSc configure options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3
Using PETSc configuration flags: #define PETSC_ARCH "arch-linux-c-opt" #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) #define PETSC_BLASLAPACK_UNDERSCORE 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" #define PETSC_DIR_SEPARATOR '/' #define PETSC_FORTRAN_CHARLEN_T size_t #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_ATTRIBUTEALIGNED 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_BZERO 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_ATOMIC 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 1 #define PETSC_HAVE_CXX_DIALECT_CXX17 1 #define PETSC_HAVE_CXX_DIALECT_CXX20 1 #define
PETSC_HAVE_DLADDR 1 #define PETSC_HAVE_DLCLOSE 1 #define PETSC_HAVE_DLERROR 1 #define PETSC_HAVE_DLFCN_H 1 #define PETSC_HAVE_DLOPEN 1 #define PETSC_HAVE_DLSYM 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_DRAND48 1 #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_EXECUTABLE_EXPORT 1 #define PETSC_HAVE_EXODUSII 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FE_VALUES 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORK 1 #define PETSC_HAVE_FORTRAN 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETDOMAINNAME 1 #define PETSC_HAVE_GETHOSTBYNAME 1 #define PETSC_HAVE_GETHOSTNAME 1 #define PETSC_HAVE_GETPAGESIZE 1 #define PETSC_HAVE_GETRUSAGE 1 #define PETSC_HAVE_HDF5 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MED 1 #define PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_METIS 1 #define PETSC_HAVE_MKSTEMP 1 #define PETSC_HAVE_MMAP 1 #define PETSC_HAVE_MPICH 1 #define PETSC_HAVE_MPICH_NUMVERSION 40101300 #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 #define PETSC_HAVE_MPIIO 1 #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 #define PETSC_HAVE_MPI_COMBINER_DUP 1 #define PETSC_HAVE_MPI_COMBINER_NAMED 1 #define PETSC_HAVE_MPI_F90MODULE 1 #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 #define PETSC_HAVE_MPI_INIT_THREAD 1 #define PETSC_HAVE_MPI_INT64_T 1 #define PETSC_HAVE_MPI_LARGE_COUNT 1 #define PETSC_HAVE_MPI_LONG_DOUBLE 1 #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 #define PETSC_HAVE_MPI_ONE_SIDED 1 #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 #define PETSC_HAVE_MPI_RGET 1 #define PETSC_HAVE_MPI_WIN_CREATE 1 #define PETSC_HAVE_MUMPS 1 #define PETSC_HAVE_NANOSLEEP 1 #define PETSC_HAVE_NETCDF 1 #define PETSC_HAVE_NETDB_H 1 #define PETSC_HAVE_NETINET_IN_H 1 #define PETSC_HAVE_OPENBLAS 1 #define PETSC_HAVE_OPENMP 1 #define PETSC_HAVE_PACKAGES ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" #define PETSC_HAVE_PNETCDF 1 #define PETSC_HAVE_POPEN 1 #define PETSC_HAVE_POSIX_MEMALIGN 1 #define PETSC_HAVE_PTHREAD 1 #define PETSC_HAVE_PWD_H 1 #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_READLINK 1 #define PETSC_HAVE_REALPATH 1 #define PETSC_HAVE_REAL___FLOAT128 1 #define PETSC_HAVE_REGEX 1 #define PETSC_HAVE_RTLD_GLOBAL 1 #define PETSC_HAVE_RTLD_LAZY 1 #define PETSC_HAVE_RTLD_LOCAL 1 #define PETSC_HAVE_RTLD_NOW 1 #define PETSC_HAVE_SCALAPACK 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_SLEEP 1 #define PETSC_HAVE_SLEPC 1 #define PETSC_HAVE_SNPRINTF 1 #define PETSC_HAVE_SOCKET 1 #define PETSC_HAVE_SOWING 1 #define PETSC_HAVE_SO_REUSEADDR 1 #define PETSC_HAVE_STDATOMIC_H 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_STRCASECMP 1 #define PETSC_HAVE_STRINGS_H 1 #define PETSC_HAVE_STRUCT_SIGACTION 1 #define 
PETSC_HAVE_SYS_PARAM_H 1 #define PETSC_HAVE_SYS_PROCFS_H 1 #define PETSC_HAVE_SYS_RESOURCE_H 1 #define PETSC_HAVE_SYS_SOCKET_H 1 #define PETSC_HAVE_SYS_TIMES_H 1 #define PETSC_HAVE_SYS_TIME_H 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_SYS_UTSNAME_H 1 #define PETSC_HAVE_SYS_WAIT_H 1 #define PETSC_HAVE_TAU_PERFSTUBS 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_UNAME 1 #define PETSC_HAVE_UNISTD_H 1 #define PETSC_HAVE_USLEEP 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_HDF5_HAVE_PARALLEL 1 #define PETSC_HDF5_HAVE_ZLIB 1 #define PETSC_INTPTR_T intptr_t #define PETSC_INTPTR_T_FMT "#" PRIxPTR #define PETSC_IS_COLORING_MAX USHRT_MAX #define PETSC_IS_COLORING_VALUE_TYPE short #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 #define PETSC_LEVEL1_DCACHE_LINESIZE 64 #define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" #define PETSC_MAX_PATH_LEN 4096 #define PETSC_MEMALIGN 16 #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi" #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PYTHON_EXE "/usr/bin/python3" #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_REPLACE_DIR_SEPARATOR '\\' #define PETSC_SIGNAL_CAST #define PETSC_SIZEOF_INT 4 #define PETSC_SIZEOF_LONG 8 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_SLSUFFIX "so" #define PETSC_UINTPTR_T uintptr_t #define PETSC_UINTPTR_T_FMT "#" PRIxPTR #define PETSC_UNUSED __attribute((unused)) #define PETSC_USE_AVX512_KERNELS 1 #define PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_CTABLE 1 #define PETSC_USE_DMLANDAU_2D 1 #define PETSC_USE_INFO 1 #define PETSC_USE_ISATTY 1 #define PETSC_USE_LOG 1 #define PETSC_USE_MALLOC_COALESCED 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_USE_SHARED_LIBRARIES 1 #define PETSC_USE_SINGLE_LIBRARY 1 #define PETSC_USE_SOCKET_VIEWER 1 #define PETSC_USE_VISIBILITY_C 1 #define PETSC_USE_VISIBILITY_CXX 1 #define PETSC_USING_64BIT_PTR 1 #define PETSC_USING_F2003 1 #define PETSC_USING_F90FREEFORM 1 #define PETSC_VERSION_BRANCH_GIT "main" #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" #define PETSC__BSD_SOURCE 1 #define PETSC__DEFAULT_SOURCE 1 #define PETSC__GNU_SOURCE 1 ----------------------------------------- Using C/C++ include paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing 
-Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp Using Fortran include/module paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp ----------------------------------------- Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 ----------------------------------------- Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -lslepc -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ ------------------------------------------ Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec ------------------------------------------ Using MAKE: /usr/bin/gmake Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc ========================================== /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= slepc_libs /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegen.py --petsc-arch=arch-linux-c-opt --pkg-dir=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc --pkg-name=slepc --pkg-pkgs=sys,eps,svd,pep,nep,mfn,lme --pkg-arch=arch-linux-c-opt CC arch-linux-c-opt/obj/sys/ftn-auto/slepcscf.o CC arch-linux-c-opt/obj/sys/ftn-auto/slepcinitf.o CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_startf.o CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_start.o CC arch-linux-c-opt/obj/sys/dlregisslepc.o CC arch-linux-c-opt/obj/sys/slepcutil.o CC 
arch-linux-c-opt/obj/sys/slepcinit.o CC arch-linux-c-opt/obj/sys/slepcsc.o CC arch-linux-c-opt/obj/sys/slepccontour.o Use "/usr/bin/gmake V=1" to see verbose compile lines, "/usr/bin/gmake V=0" to suppress. FC arch-linux-c-opt/obj/sys/f90-mod/slepcsysmod.o CC arch-linux-c-opt/obj/sys/vec/ftn-auto/vecutilf.o CC arch-linux-c-opt/obj/sys/ftn-custom/zslepcutil.o CC arch-linux-c-opt/obj/sys/vec/pool.o CC arch-linux-c-opt/obj/sys/mat/ftn-auto/matutilf.o CC arch-linux-c-opt/obj/sys/vec/vecutil.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-custom/zpolygon.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-auto/rgpolygonf.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ring/ftn-auto/rgringf.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-custom/zellipse.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-auto/rgellipsef.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/rgellipse.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-custom/zinterval.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-auto/rgintervalf.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ring/rgring.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgregis.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/rgpolygon.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-auto/rgbasicf.o CC arch-linux-c-opt/obj/sys/mat/matutil.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-custom/zrgf.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgbasic.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/phi/ftn-auto/fnphif.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/rginterval.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/combine/ftn-auto/fncombinef.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/phi/fnphi.o CC arch-linux-c-opt/obj/sys/vec/veccomp.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/rational/ftn-custom/zrational.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/sqrt/fnsqrt.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/fnutil.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/combine/fncombine.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/log/fnlog.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/fnregis.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-auto/fnbasicf.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-custom/zfnf.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/invsqrt/fninvsqrt.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/rational/fnrational.o CC arch-linux-c-opt/obj/sys/classes/st/impls/cayley/ftn-auto/cayleyf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/precond/ftn-auto/precondf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/cayley/cayley.o CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/ftn-auto/filterf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/precond/precond.o CC arch-linux-c-opt/obj/sys/classes/st/impls/sinvert/sinvert.o CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/filter.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/fnbasic.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shift/shift.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/shell.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-auto/shellf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-custom/zshell.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/exp/fnexp.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stregis.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsetf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stset.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stfuncf.o CC 
arch-linux-c-opt/obj/sys/classes/st/interface/ftn-custom/zstf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stshellmat.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stslesf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stfunc.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stsles.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsolvef.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/ftn-auto/bvtensorf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stsolve.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/contiguous/contig.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbiorthog.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/mat/bvmat.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvblas.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/svec/svec.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/vecs/vecs.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvkrylov.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvfunc.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvregis.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/bvtensor.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbasic.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvcontour.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-custom/zbvf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbiorthogf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbasicf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvcontourf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvfuncf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvglobalf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvkrylovf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvopsf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvorthogf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvops.o CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/filtlan.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvglobal.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvlapack.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/ftn-auto/dshsvdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/svd/ftn-auto/dssvdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/dsutil.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvorthog.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-auto/dspepf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-custom/zdspepf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nep/ftn-auto/dsnepf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghep/dsghep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhepts/dsnhepts.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/svd/dssvd.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/gnhep/dsgnhep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/dspep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhep/dsnhep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/dshsvd.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nep/dsnep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/hz.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dmerg2.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dlaed3m.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/ftn-auto/dsgsvdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsbtdc.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsrtdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dibtdc.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsbasicf.o CC 
arch-linux-c-opt/obj/sys/classes/ds/interface/dsbasic.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-custom/zdsf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/invit.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsopsf.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/dsops.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsprivf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/dshep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/dsghiep.o CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/ftn-auto/lobpcgf.o CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/ftn-auto/rqcgf.o CC arch-linux-c-opt/obj/eps/impls/lyapii/ftn-auto/lyapiif.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/dspriv.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/dsgsvd.o CC arch-linux-c-opt/obj/eps/impls/subspace/subspace.o CC arch-linux-c-opt/obj/eps/impls/external/scalapack/scalapack.o CC arch-linux-c-opt/obj/eps/impls/lapack/lapack.o CC arch-linux-c-opt/obj/eps/impls/ciss/ftn-auto/cissf.o CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/rqcg.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdschm.o CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/lobpcg.o CC arch-linux-c-opt/obj/eps/impls/davidson/davidson.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdtestconv.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdinitv.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdgd2.o CC arch-linux-c-opt/obj/eps/impls/lyapii/lyapii.o CC arch-linux-c-opt/obj/eps/impls/davidson/jd/ftn-auto/jdf.o CC arch-linux-c-opt/obj/eps/impls/davidson/gd/ftn-auto/gdf.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdcalcpairs.o CC arch-linux-c-opt/obj/eps/impls/davidson/gd/gd.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdutils.o CC arch-linux-c-opt/obj/eps/impls/davidson/jd/jd.o CC arch-linux-c-opt/obj/eps/impls/krylov/lanczos/ftn-auto/lanczosf.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdupdatev.o CC arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/ftn-auto/arnoldif.o CC arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/arnoldi.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-indef.o CC arch-linux-c-opt/obj/eps/impls/krylov/epskrylov.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdimprovex.o CC arch-linux-c-opt/obj/eps/impls/ciss/ciss.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-custom/zkrylovschurf.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-auto/krylovschurf.o CC arch-linux-c-opt/obj/eps/impls/power/ftn-auto/powerf.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-twosided.o CC arch-linux-c-opt/obj/eps/interface/dlregiseps.o CC arch-linux-c-opt/obj/eps/interface/epsbasic.o CC arch-linux-c-opt/obj/eps/interface/epsregis.o CC arch-linux-c-opt/obj/eps/impls/krylov/lanczos/lanczos.o CC arch-linux-c-opt/obj/eps/interface/epsdefault.o CC arch-linux-c-opt/obj/eps/interface/epsmon.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/krylovschur.o CC arch-linux-c-opt/obj/eps/interface/epsopts.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsbasicf.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsdefaultf.o CC arch-linux-c-opt/obj/eps/interface/epssetup.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsmonf.o CC arch-linux-c-opt/obj/eps/impls/power/power.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssetupf.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsviewf.o CC arch-linux-c-opt/obj/eps/interface/epssolve.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsoptsf.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssolvef.o CC 
arch-linux-c-opt/obj/eps/interface/ftn-custom/zepsf.o CC arch-linux-c-opt/obj/svd/impls/lanczos/ftn-auto/gklanczosf.o CC arch-linux-c-opt/obj/svd/impls/cross/ftn-auto/crossf.o CC arch-linux-c-opt/obj/eps/interface/epsview.o CC arch-linux-c-opt/obj/svd/impls/external/scalapack/svdscalap.o CC arch-linux-c-opt/obj/svd/impls/randomized/rsvd.o CC arch-linux-c-opt/obj/svd/impls/trlanczos/ftn-auto/trlanczosf.o CC arch-linux-c-opt/obj/svd/impls/cyclic/ftn-auto/cyclicf.o CC arch-linux-c-opt/obj/svd/interface/dlregissvd.o CC arch-linux-c-opt/obj/svd/interface/svdbasic.o CC arch-linux-c-opt/obj/svd/impls/lapack/svdlapack.o CC arch-linux-c-opt/obj/svd/impls/lanczos/gklanczos.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-slice.o CC arch-linux-c-opt/obj/svd/interface/svddefault.o CC arch-linux-c-opt/obj/svd/impls/cross/cross.o CC arch-linux-c-opt/obj/svd/interface/svdregis.o CC arch-linux-c-opt/obj/svd/interface/svdmon.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdbasicf.o CC arch-linux-c-opt/obj/svd/interface/svdopts.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svddefaultf.o CC arch-linux-c-opt/obj/svd/interface/svdsetup.o CC arch-linux-c-opt/obj/svd/interface/svdsolve.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdmonf.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdoptsf.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsetupf.o CC arch-linux-c-opt/obj/svd/interface/ftn-custom/zsvdf.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsolvef.o CC arch-linux-c-opt/obj/svd/interface/svdview.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdviewf.o CC arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/ftn-auto/qarnoldif.o CC arch-linux-c-opt/obj/pep/impls/peputils.o CC arch-linux-c-opt/obj/svd/impls/cyclic/cyclic.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/qslicef.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-custom/zstoarf.o CC arch-linux-c-opt/obj/pep/impls/krylov/pepkrylov.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/stoarf.o CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ftn-auto/ptoarf.o CC arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/qarnoldi.o CC arch-linux-c-opt/obj/pep/impls/linear/ftn-auto/linearf.o CC arch-linux-c-opt/obj/pep/impls/linear/qeplin.o CC arch-linux-c-opt/obj/pep/impls/jd/ftn-auto/pjdf.o CC arch-linux-c-opt/obj/pep/interface/dlregispep.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/stoar.o CC arch-linux-c-opt/obj/pep/interface/pepbasic.o CC arch-linux-c-opt/obj/pep/interface/pepmon.o CC arch-linux-c-opt/obj/pep/impls/linear/linear.o CC arch-linux-c-opt/obj/pep/interface/pepdefault.o CC arch-linux-c-opt/obj/svd/impls/trlanczos/trlanczos.o CC arch-linux-c-opt/obj/pep/interface/pepregis.o CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ptoar.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepbasicf.o CC arch-linux-c-opt/obj/pep/interface/pepopts.o CC arch-linux-c-opt/obj/pep/interface/pepsetup.o CC arch-linux-c-opt/obj/pep/interface/pepsolve.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepdefaultf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepmonf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepoptsf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsetupf.o CC arch-linux-c-opt/obj/pep/interface/ftn-custom/zpepf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepviewf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsolvef.o CC arch-linux-c-opt/obj/pep/interface/peprefine.o CC arch-linux-c-opt/obj/pep/interface/pepview.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/qslice.o CC 
arch-linux-c-opt/obj/nep/impls/slp/ftn-auto/slpf.o CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-custom/znleigsf.o CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigs-fullbf.o CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigsf.o CC arch-linux-c-opt/obj/nep/impls/interpol/ftn-auto/interpolf.o CC arch-linux-c-opt/obj/nep/impls/slp/slp.o CC arch-linux-c-opt/obj/nep/impls/narnoldi/ftn-auto/narnoldif.o CC arch-linux-c-opt/obj/nep/impls/slp/slp-twosided.o CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs-fullb.o CC arch-linux-c-opt/obj/nep/impls/interpol/interpol.o CC arch-linux-c-opt/obj/nep/impls/rii/ftn-auto/riif.o CC arch-linux-c-opt/obj/nep/interface/dlregisnep.o CC arch-linux-c-opt/obj/nep/impls/narnoldi/narnoldi.o CC arch-linux-c-opt/obj/pep/impls/krylov/toar/nrefine.o CC arch-linux-c-opt/obj/nep/interface/nepdefault.o CC arch-linux-c-opt/obj/nep/interface/nepregis.o CC arch-linux-c-opt/obj/nep/impls/rii/rii.o CC arch-linux-c-opt/obj/nep/interface/nepbasic.o CC arch-linux-c-opt/obj/nep/interface/nepmon.o CC arch-linux-c-opt/obj/pep/impls/jd/pjd.o CC arch-linux-c-opt/obj/nep/interface/nepresolv.o CC arch-linux-c-opt/obj/nep/interface/nepopts.o CC arch-linux-c-opt/obj/nep/impls/nepdefl.o CC arch-linux-c-opt/obj/nep/interface/nepsetup.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepdefaultf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepbasicf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepmonf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepoptsf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepresolvf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsetupf.o CC arch-linux-c-opt/obj/nep/interface/nepsolve.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsolvef.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepviewf.o CC arch-linux-c-opt/obj/nep/interface/ftn-custom/znepf.o CC arch-linux-c-opt/obj/mfn/interface/dlregismfn.o CC arch-linux-c-opt/obj/mfn/impls/krylov/mfnkrylov.o CC arch-linux-c-opt/obj/nep/interface/nepview.o CC arch-linux-c-opt/obj/nep/interface/neprefine.o CC arch-linux-c-opt/obj/mfn/interface/mfnmon.o CC arch-linux-c-opt/obj/mfn/interface/mfnregis.o CC arch-linux-c-opt/obj/mfn/impls/expokit/mfnexpokit.o CC arch-linux-c-opt/obj/mfn/interface/mfnopts.o CC arch-linux-c-opt/obj/mfn/interface/mfnbasic.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnbasicf.o CC arch-linux-c-opt/obj/mfn/interface/mfnsolve.o CC arch-linux-c-opt/obj/mfn/interface/mfnsetup.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnmonf.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnoptsf.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsetupf.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsolvef.o CC arch-linux-c-opt/obj/mfn/interface/ftn-custom/zmfnf.o CC arch-linux-c-opt/obj/lme/interface/dlregislme.o CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs.o CC arch-linux-c-opt/obj/lme/interface/lmeregis.o CC arch-linux-c-opt/obj/lme/interface/lmemon.o CC arch-linux-c-opt/obj/lme/impls/krylov/lmekrylov.o CC arch-linux-c-opt/obj/lme/interface/lmebasic.o CC arch-linux-c-opt/obj/lme/interface/lmeopts.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmemonf.o CC arch-linux-c-opt/obj/lme/interface/lmesetup.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmebasicf.o CC arch-linux-c-opt/obj/lme/interface/lmesolve.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmeoptsf.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesolvef.o CC arch-linux-c-opt/obj/lme/interface/lmedense.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesetupf.o CC 
arch-linux-c-opt/obj/lme/interface/ftn-custom/zlmef.o FC arch-linux-c-opt/obj/sys/classes/rg/f90-mod/slepcrgmod.o FC arch-linux-c-opt/obj/sys/classes/bv/f90-mod/slepcbvmod.o FC arch-linux-c-opt/obj/sys/classes/fn/f90-mod/slepcfnmod.o FC arch-linux-c-opt/obj/lme/f90-mod/slepclmemod.o FC arch-linux-c-opt/obj/sys/classes/ds/f90-mod/slepcdsmod.o FC arch-linux-c-opt/obj/sys/classes/st/f90-mod/slepcstmod.o FC arch-linux-c-opt/obj/mfn/f90-mod/slepcmfnmod.o FC arch-linux-c-opt/obj/eps/f90-mod/slepcepsmod.o FC arch-linux-c-opt/obj/svd/f90-mod/slepcsvdmod.o FC arch-linux-c-opt/obj/pep/f90-mod/slepcpepmod.o FC arch-linux-c-opt/obj/nep/f90-mod/slepcnepmod.o CLINKER arch-linux-c-opt/lib/libslepc.so.3.019.0 Now to install the library do: make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc install ========================================= *** Installing SLEPc *** *** Installing SLEPc at prefix location: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt *** ==================================== Install complete. Now to check if the libraries are working do (in current directory): make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check ==================================== /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc install-builtafterslepc /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc slepc4py-install gmake[6]: Nothing to be done for 'slepc4py-install'. ========================================= Now to check if the libraries are working do: make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check ========================================= and here is the cmake message when configuring the project: vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ cmake .. 
-- The CXX compiler identification is GNU 11.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- MPI headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- MPI library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmpich.so
-- GMSH HEADERS NOT FOUND (OPTIONAL)
-- GMSH LIBRARY NOT FOUND (OPTIONAL)
-- Ipopt headers found at /home/vrkaka/Ipopt/installation/include/coin-or
-- Ipopt library found at /home/vrkaka/Ipopt/installation/lib/libipopt.so
-- Blas header cblas.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Blas library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libopenblas.so
-- Metis headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Metis library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmetis.so
-- Mumps headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Mumps library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libcmumps.a
-- Petsc header petsc.h found at /home/vrkaka/SLlibs/petsc/include
-- Petsc header petscconf.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Petsc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libpetsc.so
-- Slepc headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Slepc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libslepc.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/vrkaka/sparselizardipopt/build

After that, building the project with cmake goes fine and a simple mpi test works

-Kalle

From facklerpw at ornl.gov Wed Jun 7 09:10:31 2023
From: facklerpw at ornl.gov (Fackler, Philip)
Date: Wed, 7 Jun 2023 14:10:31 +0000
Subject: [petsc-users] Initializing kokkos before petsc causes a problem
Message-ID: 

I'm encountering a problem in xolotl. We initialize kokkos before initializing petsc. Therefore...

The pointer referenced here:

https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/impls/basic/kokkos/sfkok.kokkos.cxx#L363

from here:

https://gitlab.com/petsc/petsc/-/blob/main/include/petsc_kokkos.hpp

remains null because the code to initialize it is skipped here:

https://gitlab.com/petsc/petsc/-/blob/main/src/sys/objects/kokkos/kinit.kokkos.cxx#L28

See line 71.

Can this be modified to allow for kokkos to have been initialized by the application before initializing petsc?

Thank you for your help,

Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science and Mathematics Division
Oak Ridge National Laboratory
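For readers following along, the ordering described above (the application bringing up Kokkos itself and only afterwards calling PETSc) looks schematically like the sketch below. This is not Xolotl's actual code, only a minimal illustration using the public Kokkos and PETSc entry points; the comments restate the report above rather than PETSc's internals.

#include <Kokkos_Core.hpp>
#include <petsc.h>

int main(int argc, char *argv[])
{
  /* Application-managed Kokkos: initialized before PETSc ... */
  Kokkos::initialize(argc, argv);

  /* ... so when PETSc initializes, it sees Kokkos already running and
     skips its own Kokkos setup -- the skipped branch linked above,
     which leaves the referenced pointer null. */
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Kokkos-backed PETSc objects (e.g. VECKOKKOS, MATAIJKOKKOS) would be
     created and used here. */

  PetscCall(PetscFinalize());

  /* The application also finalizes Kokkos, after PETSc is finalized. */
  Kokkos::finalize();
  return 0;
}

The question raised is whether PETSc can support this ordering; as described in the message, the pointer in question is currently set up only on the other path, where PetscInitialize is called first and PETSc initializes Kokkos itself.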
From bsmith at petsc.dev Wed Jun 7 09:33:09 2023
From: bsmith at petsc.dev (Barry Smith)
Date: Wed, 7 Jun 2023 09:33:09 -0500
Subject: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT
In-Reply-To: 
References: 
Message-ID: 

Does make check work in the PETSc directory?

Is it possible the mpiexec in "mpiexec -np 2 ./simulations/default/default 1e2" is not the mpiexec built by PETSc? In the PETSc directory you can run make getmpiexec to see what mpiexec PETSc built.

> On Jun 7, 2023, at 6:07 AM, Kalle Karhapää (TAU) wrote:
>
> Hi!
>
> I am using petsc in a topology optimization project with sparselizard and ipopt.
>
> I am hoping to use mpich to run sparselizard/ipopt calculations faster, but I'm getting the following error straight away:
>
> vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2
> [proxy:0:0 at WKS-101259-LT] HYD_pmcd_pmi_parse_pmi_cmd (pm/pmiserv/common.c:57):
> [proxy:0:0 at WKS-101259-LT] handle_pmi_cmd (pm/pmiserv/pmip_cb.c:115): unable to parse PMI command
> [proxy:0:0 at WKS-101259-LT] pmi_cb (pm/pmiserv/pmip_cb.c:362): unable to handle PMI command
> [proxy:0:0 at WKS-101259-LT] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
> [proxy:0:0 at WKS-101259-LT] main (pm/pmiserv/pmip.c:169): demux engine error waiting for event
>
> the problem persists with different numbers of cores -np 1–10.
> Sometimes after the previous message there is the bonus error:
>
> Fatal error in internal_Init: Other MPI error, error stack:
> internal_Init(66): MPI_Init(argc=(nil), argv=(nil)) failed
> internal_Init(46): Cannot call MPI_INIT or MPI_INIT_THREAD more than once
>
> In petsc configuration I am downloading mpich. Then I'm building the sparselizard project with the same mpich downloaded through petsc installation.
>
> here is my petsc conf:
> ./configure --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3';
>
> petsc install went as follows:
>
> vrkaka at WKS-101259-LT:~/sparselizardipopt/install_external_libs$ ./install_petsc.sh
> mkdir: cannot create directory ‘/home/vrkaka/SLlibs’: File exists
> __________________________________________
> FETCHING THE LATEST PETSC VERSION FROM GIT
> Cloning into 'petsc'...
> remote: Enumerating objects: 1097079, done.
> remote: Counting objects: 100% (687/687), done.
> remote: Compressing objects: 100% (144/144), done.
> remote: Total 1097079 (delta 555), reused 664 (delta 539), pack-reused 1096392
> Receiving objects: 100% (1097079/1097079), 344.72 MiB | 7.14 MiB/s, done.
> Resolving deltas: 100% (840415/840415), done.
> __________________________________________ > CONFIGURING PETSC > ============================================================================================= > Configuring PETSc to compile on your system > ============================================================================================= > ============================================================================================= > Trying to download > https://github.com/pmodels/mpich/releases/download/v4.1.1/mpich-4.1.1.tar.gz for MPICH > ============================================================================================= > ============================================================================================= > Running configure on MPICH; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make on MPICH; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make install on MPICH; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://bitbucket.org/petsc/pkg-sowing.git for SOWING > ============================================================================================= > ============================================================================================= > Running configure on SOWING; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make on SOWING; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make install on SOWING; this may take several minutes > ============================================================================================= > ============================================================================================= > Running arch-linux-c-opt/bin/bfort to generate Fortran stubs > ============================================================================================= > ============================================================================================= > Trying to download http://www.zlib.net/zlib-1.2.13.tar.gz for ZLIB > ============================================================================================= > ============================================================================================= > Building and installing zlib; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download > https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.2/src/hdf5-1.12.2.tar.bz2 > for HDF5 > ============================================================================================= > 
============================================================================================= > Running configure on HDF5; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make on HDF5; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make install on HDF5; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://github.com/parallel-netcdf/pnetcdf for PNETCDF > ============================================================================================= > ============================================================================================= > Running libtoolize on PNETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Running autoreconf on PNETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Running configure on PNETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make on PNETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make install on PNETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://github.com/Unidata/netcdf-c/archive/v4.9.1.tar.gz for NETCDF > ============================================================================================= > ============================================================================================= > Running configure on NETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make on NETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Running make install on NETCDF; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://bitbucket.org/petsc/pkg-med.git for MED > 
============================================================================================= > ============================================================================================= > Configuring MED with CMake; this may take several minutes > ============================================================================================= > ============================================================================================= > Compiling and installing MED; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://github.com/gsjaardema/seacas.git for EXODUSII > ============================================================================================= > ============================================================================================= > Configuring EXODUSII with CMake; this may take several minutes > ============================================================================================= > ============================================================================================= > Compiling and installing EXODUSII; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://bitbucket.org/petsc/pkg-metis.git for METIS > ============================================================================================= > ============================================================================================= > Configuring METIS with CMake; this may take several minutes > ============================================================================================= > ============================================================================================= > Compiling and installing METIS; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://github.com/xianyi/OpenBLAS.git for OPENBLAS > ============================================================================================= > ============================================================================================= > Compiling OpenBLAS; this may take several minutes > ============================================================================================= > ============================================================================================= > Installing OpenBLAS > ============================================================================================= > ============================================================================================= > Trying to download https://github.com/Reference-ScaLAPACK/scalapack for SCALAPACK > ============================================================================================= > ============================================================================================= > Configuring SCALAPACK with CMake; this may take several minutes > ============================================================================================= > ============================================================================================= > Compiling and installing SCALAPACK; this 
may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://graal.ens-lyon.fr/MUMPS/MUMPS_5.6.0.tar.gz for MUMPS > ============================================================================================= > ============================================================================================= > Compiling MUMPS; this may take several minutes > ============================================================================================= > ============================================================================================= > Installing MUMPS; this may take several minutes > ============================================================================================= > ============================================================================================= > Trying to download https://gitlab.com/slepc/slepc.git for SLEPC > ============================================================================================= > ============================================================================================= > SLEPc examples are available at arch-linux-c-opt/externalpackages/git.slepc > export SLEPC_DIR=arch-linux-c-opt > ============================================================================================= > Compilers: > C Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 -fopenmp > Version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > C++ Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -fopenmp > Version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > Fortran Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp > Version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > Linkers: > Shared linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > Dynamic linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > Libraries linked against: > BlasLapack: > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas > uses OpenMP; use export OMP_NUM_THREADS=

<p> or -omp_num_threads <p>

to control the number of threads > uses 4 byte integers > MPI: > Version: 4 > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec > Implementation: mpich4 > MPICH_NUMVERSION: 40101300 > MPICH: > python: > Executable: /usr/bin/python3 > openmp: > Version: 201511 > pthread: > cmake: > Version: 3.22.1 > Executable: /usr/bin/cmake > openblas: > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas > uses OpenMP; use export OMP_NUM_THREADS=

<p> or -omp_num_threads <p>

to control the number of threads > zlib: > Version: 1.2.13 > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lz > hdf5: > Version: 1.12.2 > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lhdf5_hl -lhdf5 > netcdf: > Version: 4.9.1 > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lnetcdf > pnetcdf: > Version: 1.12.3 > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lpnetcdf > metis: > Version: 5.1.0 > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmetis > slepc: > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lslepc > regex: > MUMPS: > Version: 5.6.0 > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -ldmumps -lmumps_common -lpord -lpthread > uses OpenMP; use export OMP_NUM_THREADS=

<p> or -omp_num_threads <p>

to control the number of threads > scalapack: > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lscalapack > exodusii: > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lexoIIv2for32 -lexodus > med: > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmedC -lmed > sowing: > Version: 1.1.26 > Executable: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/bfort > PETSc: > Language used to compile PETSc: C > PETSC_ARCH: arch-linux-c-opt > PETSC_DIR: /home/vrkaka/SLlibs/petsc > Prefix: > Scalar type: real > Precision: double > Support for __float128 > Integer size: 4 bytes > Single library: yes > Shared libraries: yes > Memory alignment from malloc(): 16 bytes > Using GNU make: /usr/bin/gmake > xxx=========================================================================xxx > Configure stage complete. Now build PETSc libraries with: > make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt all > xxx=========================================================================xxx > __________________________________________ > COMPILING PETSC > /usr/bin/python3 ./config/gmakegen.py --petsc-arch=arch-linux-c-opt > /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegentest.py --petsc-dir=/home/vrkaka/SLlibs/petsc --petsc-arch=arch-linux-c-opt --testdir=./arch-linux-c-opt/tests > make: '/home/vrkaka/SLlibs/petsc' is up to date. > make: 'arch-linux-c-opt' is up to date. > /home/vrkaka/SLlibs/petsc/lib/petsc/bin/petscnagupgrade.py:14: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives > from distutils.version import LooseVersion as Version > ========================================== > > See documentation/faq.html and documentation/bugreporting.html > for help with installation problems. Please send EVERYTHING > printed out below when reporting problems. Please check the > mailing list archives and consider subscribing. 
> > https://petsc.org/release/community/mailing/ > > ========================================== > Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:19:10 +0300 > Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux > ----------------------------------------- > Using PETSc directory: /home/vrkaka/SLlibs/petsc > Using PETSc arch: arch-linux-c-opt > ----------------------------------------- > PETSC_VERSION_RELEASE 0 > PETSC_VERSION_MAJOR 3 > PETSC_VERSION_MINOR 19 > PETSC_VERSION_SUBMINOR 2 > PETSC_VERSION_DATE "unknown" > PETSC_VERSION_GIT "unknown" > PETSC_VERSION_DATE_GIT "unknown" > ----------------------------------------- > Using configure Options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 > Using configuration flags: > #define PETSC_ARCH "arch-linux-c-opt" > #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) > #define PETSC_BLASLAPACK_UNDERSCORE 1 > #define PETSC_CLANGUAGE_C 1 > #define PETSC_CXX_RESTRICT __restrict > #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) > #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) > #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) > #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) > #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" > #define PETSC_DIR_SEPARATOR '/' > #define PETSC_FORTRAN_CHARLEN_T size_t > #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 > #define PETSC_FUNCTION_NAME_C __func__ > #define PETSC_FUNCTION_NAME_CXX __func__ > #define PETSC_HAVE_ACCESS 1 > #define PETSC_HAVE_ATOLL 1 > #define PETSC_HAVE_ATTRIBUTEALIGNED 1 > #define PETSC_HAVE_BUILTIN_EXPECT 1 > #define PETSC_HAVE_BZERO 1 > #define PETSC_HAVE_C99_COMPLEX 1 > #define PETSC_HAVE_CLOCK 1 > #define PETSC_HAVE_CXX 1 > #define PETSC_HAVE_CXX_ATOMIC 1 > #define PETSC_HAVE_CXX_COMPLEX 1 > #define PETSC_HAVE_CXX_COMPLEX_FIX 1 > #define PETSC_HAVE_CXX_DIALECT_CXX11 1 > #define PETSC_HAVE_CXX_DIALECT_CXX14 1 > #define PETSC_HAVE_CXX_DIALECT_CXX17 1 > #define PETSC_HAVE_CXX_DIALECT_CXX20 1 > #define PETSC_HAVE_DLADDR 1 > #define PETSC_HAVE_DLCLOSE 1 > #define PETSC_HAVE_DLERROR 1 > #define PETSC_HAVE_DLFCN_H 1 > #define PETSC_HAVE_DLOPEN 1 > #define PETSC_HAVE_DLSYM 1 > #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 > #define PETSC_HAVE_DRAND48 1 > #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 > #define PETSC_HAVE_ERF 1 > #define PETSC_HAVE_EXECUTABLE_EXPORT 1 > #define PETSC_HAVE_EXODUSII 1 > #define PETSC_HAVE_FCNTL_H 1 > #define PETSC_HAVE_FENV_H 1 > #define PETSC_HAVE_FE_VALUES 1 > #define PETSC_HAVE_FLOAT_H 1 > #define PETSC_HAVE_FORK 1 > #define PETSC_HAVE_FORTRAN 1 > #define PETSC_HAVE_FORTRAN_FLUSH 1 > #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 > #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 > #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 > #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 > #define PETSC_HAVE_GETCWD 1 > #define PETSC_HAVE_GETDOMAINNAME 1 > #define PETSC_HAVE_GETHOSTBYNAME 1 > #define PETSC_HAVE_GETHOSTNAME 1 > #define PETSC_HAVE_GETPAGESIZE 1 > #define PETSC_HAVE_GETRUSAGE 1 > #define PETSC_HAVE_HDF5 1 > #define PETSC_HAVE_IMMINTRIN_H 1 > #define PETSC_HAVE_INTTYPES_H 1 > #define PETSC_HAVE_ISINF 1 > #define PETSC_HAVE_ISNAN 1 > #define 
PETSC_HAVE_ISNORMAL 1 > #define PETSC_HAVE_LGAMMA 1 > #define PETSC_HAVE_LOG2 1 > #define PETSC_HAVE_LSEEK 1 > #define PETSC_HAVE_MALLOC_H 1 > #define PETSC_HAVE_MED 1 > #define PETSC_HAVE_MEMMOVE 1 > #define PETSC_HAVE_METIS 1 > #define PETSC_HAVE_MKSTEMP 1 > #define PETSC_HAVE_MMAP 1 > #define PETSC_HAVE_MPICH 1 > #define PETSC_HAVE_MPICH_NUMVERSION 40101300 > #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 > #define PETSC_HAVE_MPIIO 1 > #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 > #define PETSC_HAVE_MPI_COMBINER_DUP 1 > #define PETSC_HAVE_MPI_COMBINER_NAMED 1 > #define PETSC_HAVE_MPI_F90MODULE 1 > #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 > #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 > #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 > #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 > #define PETSC_HAVE_MPI_INIT_THREAD 1 > #define PETSC_HAVE_MPI_INT64_T 1 > #define PETSC_HAVE_MPI_LARGE_COUNT 1 > #define PETSC_HAVE_MPI_LONG_DOUBLE 1 > #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 > #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 > #define PETSC_HAVE_MPI_ONE_SIDED 1 > #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 > #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 > #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 > #define PETSC_HAVE_MPI_RGET 1 > #define PETSC_HAVE_MPI_WIN_CREATE 1 > #define PETSC_HAVE_MUMPS 1 > #define PETSC_HAVE_NANOSLEEP 1 > #define PETSC_HAVE_NETCDF 1 > #define PETSC_HAVE_NETDB_H 1 > #define PETSC_HAVE_NETINET_IN_H 1 > #define PETSC_HAVE_OPENBLAS 1 > #define PETSC_HAVE_OPENMP 1 > #define PETSC_HAVE_PACKAGES ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" > #define PETSC_HAVE_PNETCDF 1 > #define PETSC_HAVE_POPEN 1 > #define PETSC_HAVE_POSIX_MEMALIGN 1 > #define PETSC_HAVE_PTHREAD 1 > #define PETSC_HAVE_PWD_H 1 > #define PETSC_HAVE_RAND 1 > #define PETSC_HAVE_READLINK 1 > #define PETSC_HAVE_REALPATH 1 > #define PETSC_HAVE_REAL___FLOAT128 1 > #define PETSC_HAVE_REGEX 1 > #define PETSC_HAVE_RTLD_GLOBAL 1 > #define PETSC_HAVE_RTLD_LAZY 1 > #define PETSC_HAVE_RTLD_LOCAL 1 > #define PETSC_HAVE_RTLD_NOW 1 > #define PETSC_HAVE_SCALAPACK 1 > #define PETSC_HAVE_SETJMP_H 1 > #define PETSC_HAVE_SLEEP 1 > #define PETSC_HAVE_SLEPC 1 > #define PETSC_HAVE_SNPRINTF 1 > #define PETSC_HAVE_SOCKET 1 > #define PETSC_HAVE_SOWING 1 > #define PETSC_HAVE_SO_REUSEADDR 1 > #define PETSC_HAVE_STDATOMIC_H 1 > #define PETSC_HAVE_STDINT_H 1 > #define PETSC_HAVE_STRCASECMP 1 > #define PETSC_HAVE_STRINGS_H 1 > #define PETSC_HAVE_STRUCT_SIGACTION 1 > #define PETSC_HAVE_SYS_PARAM_H 1 > #define PETSC_HAVE_SYS_PROCFS_H 1 > #define PETSC_HAVE_SYS_RESOURCE_H 1 > #define PETSC_HAVE_SYS_SOCKET_H 1 > #define PETSC_HAVE_SYS_TIMES_H 1 > #define PETSC_HAVE_SYS_TIME_H 1 > #define PETSC_HAVE_SYS_TYPES_H 1 > #define PETSC_HAVE_SYS_UTSNAME_H 1 > #define PETSC_HAVE_SYS_WAIT_H 1 > #define PETSC_HAVE_TAU_PERFSTUBS 1 > #define PETSC_HAVE_TGAMMA 1 > #define PETSC_HAVE_TIME 1 > #define PETSC_HAVE_TIME_H 1 > #define PETSC_HAVE_UNAME 1 > #define PETSC_HAVE_UNISTD_H 1 > #define PETSC_HAVE_USLEEP 1 > #define PETSC_HAVE_VA_COPY 1 > #define PETSC_HAVE_VSNPRINTF 1 > #define PETSC_HAVE_XMMINTRIN_H 1 > #define PETSC_HDF5_HAVE_PARALLEL 1 > #define PETSC_HDF5_HAVE_ZLIB 1 > #define PETSC_INTPTR_T intptr_t > #define PETSC_INTPTR_T_FMT "#" PRIxPTR > #define PETSC_IS_COLORING_MAX USHRT_MAX > #define PETSC_IS_COLORING_VALUE_TYPE short > #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 > #define PETSC_LEVEL1_DCACHE_LINESIZE 64 > #define PETSC_LIB_DIR 
"/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" > #define PETSC_MAX_PATH_LEN 4096 > #define PETSC_MEMALIGN 16 > #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi" > #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT > #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" > #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA > #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 > #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 > #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 > #define PETSC_PYTHON_EXE "/usr/bin/python3" > #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) > #define PETSC_REPLACE_DIR_SEPARATOR '\\' > #define PETSC_SIGNAL_CAST > #define PETSC_SIZEOF_INT 4 > #define PETSC_SIZEOF_LONG 8 > #define PETSC_SIZEOF_LONG_LONG 8 > #define PETSC_SIZEOF_SIZE_T 8 > #define PETSC_SIZEOF_VOID_P 8 > #define PETSC_SLSUFFIX "so" > #define PETSC_UINTPTR_T uintptr_t > #define PETSC_UINTPTR_T_FMT "#" PRIxPTR > #define PETSC_UNUSED __attribute((unused)) > #define PETSC_USE_AVX512_KERNELS 1 > #define PETSC_USE_BACKWARD_LOOP 1 > #define PETSC_USE_CTABLE 1 > #define PETSC_USE_DMLANDAU_2D 1 > #define PETSC_USE_INFO 1 > #define PETSC_USE_ISATTY 1 > #define PETSC_USE_LOG 1 > #define PETSC_USE_MALLOC_COALESCED 1 > #define PETSC_USE_PROC_FOR_SIZE 1 > #define PETSC_USE_REAL_DOUBLE 1 > #define PETSC_USE_SHARED_LIBRARIES 1 > #define PETSC_USE_SINGLE_LIBRARY 1 > #define PETSC_USE_SOCKET_VIEWER 1 > #define PETSC_USE_VISIBILITY_C 1 > #define PETSC_USE_VISIBILITY_CXX 1 > #define PETSC_USING_64BIT_PTR 1 > #define PETSC_USING_F2003 1 > #define PETSC_USING_F90FREEFORM 1 > #define PETSC_VERSION_BRANCH_GIT "main" > #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" > #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" > #define PETSC__BSD_SOURCE 1 > #define PETSC__DEFAULT_SOURCE 1 > #define PETSC__GNU_SOURCE 1 > ----------------------------------------- > Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > mpicc -show: gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi > C compiler version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > mpicxx -show: g++ -Wno-lto-type-mismatch -Wno-psabi -O3 -std=gnu++20 -fPIC -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpicxx -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi > C++ compiler version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp 
-I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > mpif90 -show: gfortran -fPIC -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -O3 -fallow-argument-mismatch -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpifort -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi > Fortran compiler version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > ----------------------------------------- > Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc > Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 > Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 > ----------------------------------------- > Using system modules: > Using mpi.h: # 1 "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include/mpi.h" 1 > ----------------------------------------- > Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ > ------------------------------------------ > Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec > ------------------------------------------ > Using MAKE: /usr/bin/gmake > Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc > ========================================== > /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= libs > FC arch-linux-c-opt/obj/sys/fsrc/somefort.o > CXX arch-linux-c-opt/obj/sys/dll/cxx/demangle.o > FC arch-linux-c-opt/obj/sys/f90-src/fsrc/f90_fwrap.o > CC arch-linux-c-opt/obj/sys/f90-custom/zsysf90.o > FC arch-linux-c-opt/obj/sys/f90-mod/petscsysmod.o > CC arch-linux-c-opt/obj/sys/dll/dlimpl.o > CC arch-linux-c-opt/obj/sys/dll/dl.o > CC arch-linux-c-opt/obj/sys/dll/ftn-auto/regf.o > CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostcontext.o > CC arch-linux-c-opt/obj/sys/ftn-custom/zsys.o > CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostdevice.o > CC arch-linux-c-opt/obj/sys/ftn-custom/zutils.o > CXX arch-linux-c-opt/obj/sys/objects/device/interface/global_dcontext.o > CC arch-linux-c-opt/obj/sys/dll/reg.o > CC arch-linux-c-opt/obj/sys/logging/xmlviewer.o > CC arch-linux-c-opt/obj/sys/logging/utils/stack.o > CC arch-linux-c-opt/obj/sys/logging/utils/classlog.o > CXX arch-linux-c-opt/obj/sys/objects/device/interface/device.o > CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zpetscloghf.o > CC arch-linux-c-opt/obj/sys/logging/utils/stagelog.o > CC arch-linux-c-opt/obj/sys/logging/ftn-auto/xmllogeventf.o > CC arch-linux-c-opt/obj/sys/logging/ftn-auto/plogf.o > CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zplogf.o > CC 
arch-linux-c-opt/obj/sys/logging/utils/eventlog.o > CC arch-linux-c-opt/obj/sys/python/ftn-custom/zpythonf.o > CC arch-linux-c-opt/obj/sys/utils/arch.o > CXX arch-linux-c-opt/obj/sys/objects/device/interface/memory.o > CC arch-linux-c-opt/obj/sys/python/pythonsys.o > CC arch-linux-c-opt/obj/sys/utils/fhost.o > CC arch-linux-c-opt/obj/sys/utils/fuser.o > CC arch-linux-c-opt/obj/sys/utils/matheq.o > CC arch-linux-c-opt/obj/sys/utils/mathclose.o > CC arch-linux-c-opt/obj/sys/utils/mathfit.o > CC arch-linux-c-opt/obj/sys/utils/mathinf.o > CC arch-linux-c-opt/obj/sys/utils/ctable.o > CC arch-linux-c-opt/obj/sys/utils/memc.o > CC arch-linux-c-opt/obj/sys/utils/mpilong.o > CC arch-linux-c-opt/obj/sys/logging/xmllogevent.o > CC arch-linux-c-opt/obj/sys/utils/mpitr.o > CC arch-linux-c-opt/obj/sys/utils/mpishm.o > CC arch-linux-c-opt/obj/sys/utils/pbarrier.o > CC arch-linux-c-opt/obj/sys/utils/mpiu.o > CC arch-linux-c-opt/obj/sys/utils/psleep.o > CC arch-linux-c-opt/obj/sys/utils/pdisplay.o > CC arch-linux-c-opt/obj/sys/utils/psplit.o > CC arch-linux-c-opt/obj/sys/utils/segbuffer.o > CC arch-linux-c-opt/obj/sys/utils/mpimesg.o > CC arch-linux-c-opt/obj/sys/utils/sortd.o > CC arch-linux-c-opt/obj/sys/utils/sseenabled.o > CC arch-linux-c-opt/obj/sys/utils/sortip.o > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zarchf.o > CC arch-linux-c-opt/obj/sys/utils/mpits.o > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zfhostf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zsortsof.o > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zstrf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/memcf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpitsf.o > CC arch-linux-c-opt/obj/sys/logging/plog.o > CC arch-linux-c-opt/obj/sys/utils/str.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpiuf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psleepf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psplitf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortdf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortipf.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortsof.o > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortif.o > CC arch-linux-c-opt/obj/sys/totalview/tv_data_display.o > CC arch-linux-c-opt/obj/sys/objects/gcomm.o > CC arch-linux-c-opt/obj/sys/objects/gcookie.o > CC arch-linux-c-opt/obj/sys/objects/fcallback.o > CC arch-linux-c-opt/obj/sys/objects/destroy.o > CC arch-linux-c-opt/obj/sys/objects/gtype.o > CC arch-linux-c-opt/obj/sys/utils/sorti.o > CXX arch-linux-c-opt/obj/sys/objects/device/interface/dcontext.o > CC arch-linux-c-opt/obj/sys/objects/olist.o > CC arch-linux-c-opt/obj/sys/objects/garbage.o > CC arch-linux-c-opt/obj/sys/objects/pgname.o > CC arch-linux-c-opt/obj/sys/objects/package.o > CC arch-linux-c-opt/obj/sys/objects/inherit.o > CXX arch-linux-c-opt/obj/sys/objects/device/interface/mark_dcontext.o > CC arch-linux-c-opt/obj/sys/utils/sortso.o > CC arch-linux-c-opt/obj/sys/objects/aoptions.o > CC arch-linux-c-opt/obj/sys/objects/prefix.o > CC arch-linux-c-opt/obj/sys/objects/init.o > CC arch-linux-c-opt/obj/sys/objects/pname.o > CC arch-linux-c-opt/obj/sys/objects/ptype.o > CC arch-linux-c-opt/obj/sys/objects/state.o > CC arch-linux-c-opt/obj/sys/objects/version.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/destroyf.o > CC arch-linux-c-opt/obj/sys/objects/device/util/memory.o > CC arch-linux-c-opt/obj/sys/objects/device/util/devicereg.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcommf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcookief.o > CC 
arch-linux-c-opt/obj/sys/objects/ftn-auto/inheritf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/optionsf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/pinitf.o > CC arch-linux-c-opt/obj/sys/objects/tagm.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/statef.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/subcommf.o > CC arch-linux-c-opt/obj/sys/objects/subcomm.o > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/tagmf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgcommf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zdestroyf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgtype.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zinheritf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsyamlf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpackage.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpgnamef.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpnamef.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zprefixf.o > CC arch-linux-c-opt/obj/sys/objects/pinit.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zptypef.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstartf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zversionf.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstart.o > CC arch-linux-c-opt/obj/sys/memory/mhbw.o > CC arch-linux-c-opt/obj/sys/memory/mem.o > CC arch-linux-c-opt/obj/sys/memory/ftn-auto/memf.o > CC arch-linux-c-opt/obj/sys/memory/ftn-custom/zmtrf.o > CC arch-linux-c-opt/obj/sys/memory/mal.o > CC arch-linux-c-opt/obj/sys/memory/ftn-auto/mtrf.o > CC arch-linux-c-opt/obj/sys/perfstubs/pstimer.o > CC arch-linux-c-opt/obj/sys/error/errabort.o > CC arch-linux-c-opt/obj/sys/error/checkptr.o > CC arch-linux-c-opt/obj/sys/error/errstop.o > CC arch-linux-c-opt/obj/sys/error/pstack.o > CC arch-linux-c-opt/obj/sys/error/adebug.o > CC arch-linux-c-opt/obj/sys/error/errtrace.o > CC arch-linux-c-opt/obj/sys/error/fp.o > CC arch-linux-c-opt/obj/sys/memory/mtr.o > CC arch-linux-c-opt/obj/sys/error/signal.o > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsf.o > CC arch-linux-c-opt/obj/sys/error/ftn-auto/adebugf.o > CC arch-linux-c-opt/obj/sys/error/ftn-auto/checkptrf.o > CC arch-linux-c-opt/obj/sys/objects/options.o > CC arch-linux-c-opt/obj/sys/error/ftn-custom/zerrf.o > CC arch-linux-c-opt/obj/sys/error/ftn-auto/errf.o > CC arch-linux-c-opt/obj/sys/error/ftn-auto/fpf.o > CC arch-linux-c-opt/obj/sys/error/ftn-auto/signalf.o > CC arch-linux-c-opt/obj/sys/error/err.o > CC arch-linux-c-opt/obj/sys/fileio/fpath.o > CC arch-linux-c-opt/obj/sys/fileio/fdir.o > CC arch-linux-c-opt/obj/sys/fileio/fwd.o > CC arch-linux-c-opt/obj/sys/fileio/ghome.o > CC arch-linux-c-opt/obj/sys/fileio/ftest.o > CC arch-linux-c-opt/obj/sys/fileio/grpath.o > CC arch-linux-c-opt/obj/sys/fileio/rpath.o > CC arch-linux-c-opt/obj/sys/fileio/mpiuopen.o > CC arch-linux-c-opt/obj/sys/fileio/smatlab.o > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmpiuopenf.o > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zghomef.o > CC arch-linux-c-opt/obj/sys/fileio/fretrieve.o > CC arch-linux-c-opt/obj/sys/fileio/ftn-auto/sysiof.o > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmprintf.o > CC arch-linux-c-opt/obj/sys/info/ftn-auto/verboseinfof.o > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zsysiof.o > CC arch-linux-c-opt/obj/sys/info/ftn-custom/zverboseinfof.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/axis.o > CC arch-linux-c-opt/obj/sys/fileio/mprint.o > CC arch-linux-c-opt/obj/sys/info/verboseinfo.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/bars.o 
> CC arch-linux-c-opt/obj/sys/classes/draw/utils/cmap.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/image.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/axisc.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/dscatter.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/lg.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/zoom.o > CC arch-linux-c-opt/obj/sys/fileio/sysio.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zlgcf.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/hists.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zzoomf.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zaxisf.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/axiscf.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/barsf.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/lgc.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/dscatterf.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/histsf.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dcoor.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dclear.o > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgcf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dellipse.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dflush.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpause.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dline.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmarker.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmouse.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpoint.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawregall.o > CC arch-linux-c-opt/obj/sys/objects/optionsyaml.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/drect.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawreg.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/draw.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtext.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawregf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtextf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dsave.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtrif.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtri.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dclearf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dcoorf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dviewp.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dellipsef.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dflushf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmousef.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmarkerf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dlinef.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpausef.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpointf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawregf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drectf.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dsavef.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtextf.o > CC 
arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtrif.o > CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dviewpf.o > CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/ftn-auto/drawnullf.o > CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/drawnull.o > CC arch-linux-c-opt/obj/sys/classes/random/interface/dlregisrand.o > CC arch-linux-c-opt/obj/sys/classes/random/interface/random.o > CC arch-linux-c-opt/obj/sys/classes/random/interface/randreg.o > CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomcf.o > CC arch-linux-c-opt/obj/sys/classes/draw/impls/tikz/tikz.o > CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-custom/zrandomf.o > CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomf.o > CC arch-linux-c-opt/obj/sys/classes/random/interface/randomc.o > CC arch-linux-c-opt/obj/sys/classes/random/impls/rand48/rand48.o > CC arch-linux-c-opt/obj/sys/classes/random/impls/rand/rand.o > CC arch-linux-c-opt/obj/sys/classes/bag/ftn-auto/bagf.o > CC arch-linux-c-opt/obj/sys/classes/random/impls/rander48/rander48.o > CC arch-linux-c-opt/obj/sys/classes/bag/ftn-custom/zbagf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dupl.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/flush.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dlregispetsc.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewa.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewers.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewasetf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewregall.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/view.o > CC arch-linux-c-opt/obj/sys/classes/bag/f90-custom/zbagf90.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewaf.o > CC arch-linux-c-opt/obj/sys/classes/draw/impls/image/drawimage.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewregf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/ftn-auto/glvisf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-auto/drawvf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-custom/zdrawvf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-custom/zbinvf.o > CC arch-linux-c-opt/obj/sys/classes/bag/bag.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-auto/binvf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/f90-custom/zbinvf90.o > CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewreg.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/ftn-custom/zsendf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-auto/hdf5vf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/ftn-custom/zstringvf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/stringv.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-custom/zhdf5f.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/drawv.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/send.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/ftn-custom/zvtkvf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/glvis.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vu/petscvu.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/vtkv.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zvcreatef.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/filevf.o > CC 
arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/vcreateaf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/vcreatea.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zfilevf.o > CC arch-linux-c-opt/obj/sys/time/cputime.o > CC arch-linux-c-opt/obj/sys/time/fdate.o > CC arch-linux-c-opt/obj/sys/time/ftn-auto/cputimef.o > CC arch-linux-c-opt/obj/sys/time/ftn-custom/zptimef.o > CC arch-linux-c-opt/obj/sys/f90-src/f90_cwrap.o > CC arch-linux-c-opt/obj/vec/pf/interface/pfall.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/hdf5v.o > CC arch-linux-c-opt/obj/vec/pf/interface/ftn-custom/zpff.o > CC arch-linux-c-opt/obj/vec/pf/interface/ftn-auto/pff.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/binv.o > CC arch-linux-c-opt/obj/vec/pf/impls/constant/const.o > CC arch-linux-c-opt/obj/vec/pf/interface/pf.o > CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/filev.o > CC arch-linux-c-opt/obj/vec/pf/impls/string/cstring.o > CC arch-linux-c-opt/obj/vec/is/utils/isio.o > CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zhdf5io.o > CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zisltogf.o > CC arch-linux-c-opt/obj/vec/is/utils/pmap.o > CC arch-linux-c-opt/obj/vec/is/utils/hdf5io.o > CC arch-linux-c-opt/obj/vec/is/utils/f90-custom/zisltogf90.o > CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zvsectionisf.o > CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/isltogf.o > CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/pmapf.o > CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/psortf.o > CC arch-linux-c-opt/obj/vec/is/is/utils/f90-custom/ziscoloringf90.o > CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-custom/ziscoloringf.o > CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/isblockf.o > CC arch-linux-c-opt/obj/vec/is/is/utils/iscomp.o > CC arch-linux-c-opt/obj/vec/is/utils/psort.o > CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/iscompf.o > CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/iscoloringf.o > CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/isdifff.o > CC arch-linux-c-opt/obj/vec/is/is/utils/isblock.o > CC arch-linux-c-opt/obj/vec/is/is/interface/isreg.o > CC arch-linux-c-opt/obj/vec/is/is/interface/isregall.o > CC arch-linux-c-opt/obj/vec/is/is/interface/f90-custom/zindexf90.o > CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-auto/indexf.o > CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-custom/zindexf.o > CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-auto/isregf.o > CC arch-linux-c-opt/obj/vec/is/is/impls/stride/ftn-auto/stridef.o > CC arch-linux-c-opt/obj/vec/is/is/utils/isdiff.o > CC arch-linux-c-opt/obj/vec/is/is/utils/iscoloring.o > CC arch-linux-c-opt/obj/vec/is/is/impls/block/ftn-custom/zblockf.o > CC arch-linux-c-opt/obj/vec/is/is/impls/block/ftn-auto/blockf.o > FC arch-linux-c-opt/obj/vec/f90-mod/petscvecmod.o > CC arch-linux-c-opt/obj/vec/is/is/impls/f90-custom/zblockf90.o > CC arch-linux-c-opt/obj/vec/is/is/impls/stride/stride.o > CC arch-linux-c-opt/obj/vec/is/is/impls/general/ftn-auto/generalf.o > CC arch-linux-c-opt/obj/vec/is/section/interface/ftn-custom/zsectionf.o > CC arch-linux-c-opt/obj/vec/is/section/interface/f90-custom/zvsectionisf90.o > CC arch-linux-c-opt/obj/vec/is/section/interface/ftn-auto/sectionf.o > CC arch-linux-c-opt/obj/vec/is/is/impls/block/block.o > CC arch-linux-c-opt/obj/vec/is/ao/interface/aoreg.o > CC arch-linux-c-opt/obj/vec/is/ao/interface/ao.o > CC arch-linux-c-opt/obj/vec/is/ao/interface/aoregall.o > CC arch-linux-c-opt/obj/vec/is/ao/interface/dlregisdm.o > CC 
arch-linux-c-opt/obj/vec/is/ao/interface/ftn-auto/aof.o > CC arch-linux-c-opt/obj/vec/is/ao/interface/ftn-custom/zaof.o > CC arch-linux-c-opt/obj/vec/is/ao/impls/basic/ftn-custom/zaobasicf.o > CC arch-linux-c-opt/obj/vec/is/section/interface/sectionhdf5.o > CC arch-linux-c-opt/obj/vec/is/is/impls/general/general.o > CC arch-linux-c-opt/obj/vec/is/utils/isltog.o > CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/ftn-auto/aomappingf.o > CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/ftn-custom/zaomappingf.o > CC arch-linux-c-opt/obj/vec/is/is/interface/index.o > CC arch-linux-c-opt/obj/vec/is/ao/impls/basic/aobasic.o > CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-custom/zsfutilsf.o > CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-auto/sfcoordf.o > CC arch-linux-c-opt/obj/vec/is/sf/utils/f90-custom/zsfutilsf90.o > CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/aomapping.o > CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-auto/sfutilsf.o > CC arch-linux-c-opt/obj/vec/is/sf/utils/sfcoord.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/dlregissf.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/sfregi.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-custom/zsf.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-custom/zvscat.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/sftype.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-auto/sff.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-auto/vscatf.o > CC arch-linux-c-opt/obj/vec/is/ao/impls/memscalable/aomemscalable.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/gather/sfgather.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/gatherv/sfgatherv.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfmpi.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/alltoall/sfalltoall.o > CC arch-linux-c-opt/obj/vec/is/sf/utils/sfutils.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/allgather/sfallgather.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfbasic.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/vscat.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/neighbor/sfneighbor.o > CC arch-linux-c-opt/obj/vec/vec/utils/vecglvis.o > CC arch-linux-c-opt/obj/vec/is/section/interface/section.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/allgatherv/sfallgatherv.o > CC arch-linux-c-opt/obj/vec/vec/utils/vecio.o > CC arch-linux-c-opt/obj/vec/vec/utils/vecs.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/dlregistagger.o > CC arch-linux-c-opt/obj/vec/vec/utils/comb.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/window/sfwindow.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/tagger.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/taggerregi.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/ftn-auto/taggerf.o > CC arch-linux-c-opt/obj/vec/vec/utils/vsection.o > CC arch-linux-c-opt/obj/vec/vec/utils/projection.o > CC arch-linux-c-opt/obj/vec/vec/utils/vecstash.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/absolute.o > CC arch-linux-c-opt/obj/vec/is/sf/interface/sf.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/and.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/andor.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/or.o > CC arch-linux-c-opt/obj/vec/vec/utils/f90-custom/zvsectionf90.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/relative.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/simple.o > CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/combf.o > CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/projectionf.o > CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/veciof.o > CC 
arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/vsectionf.o > CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/vinvf.o > CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/cdf.o > CC arch-linux-c-opt/obj/vec/vec/interface/veccreate.o > CC arch-linux-c-opt/obj/vec/vec/interface/vecregall.o > CC arch-linux-c-opt/obj/vec/vec/interface/ftn-custom/zvecregf.o > CC arch-linux-c-opt/obj/vec/vec/interface/dlregisvec.o > CC arch-linux-c-opt/obj/vec/vec/interface/vecreg.o > CC arch-linux-c-opt/obj/vec/vec/interface/f90-custom/zvectorf90.o > CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/veccreatef.o > CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/rvectorf.o > CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/vectorf.o > CC arch-linux-c-opt/obj/vec/vec/interface/ftn-custom/zvectorf.o > CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec3.o > CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec1.o > CC arch-linux-c-opt/obj/vec/vec/utils/vinv.o > CC arch-linux-c-opt/obj/vec/vec/impls/seq/vseqcr.o > CC arch-linux-c-opt/obj/vec/vec/impls/seq/ftn-custom/zbvec2f.o > CC arch-linux-c-opt/obj/vec/vec/impls/seq/ftn-auto/vseqcrf.o > CC arch-linux-c-opt/obj/vec/vec/impls/shared/ftn-auto/shvecf.o > CC arch-linux-c-opt/obj/vec/vec/impls/shared/shvec.o > CC arch-linux-c-opt/obj/vec/vec/impls/nest/ftn-custom/zvecnestf.o > CC arch-linux-c-opt/obj/vec/vec/impls/nest/ftn-auto/vecnestf.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/commonmpvec.o > CC arch-linux-c-opt/obj/vec/vec/impls/seq/dvec2.o > CC arch-linux-c-opt/obj/vec/vec/interface/vector.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/vmpicr.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pvec2.o > CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec2.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-custom/zpbvecf.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/commonmpvecf.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/vmpicrf.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/pbvecf.o > CC arch-linux-c-opt/obj/mat/coarsen/scoarsen.o > CC arch-linux-c-opt/obj/mat/coarsen/ftn-auto/coarsenf.o > CC arch-linux-c-opt/obj/mat/coarsen/ftn-custom/zcoarsenf.o > CC arch-linux-c-opt/obj/vec/vec/interface/rvector.o > CC arch-linux-c-opt/obj/mat/coarsen/coarsen.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pbvec.o > CC arch-linux-c-opt/obj/mat/coarsen/impls/misk/ftn-auto/miskf.o > CC arch-linux-c-opt/obj/vec/vec/impls/nest/vecnest.o > CC arch-linux-c-opt/obj/mat/color/utils/bipartite.o > FC arch-linux-c-opt/obj/mat/f90-mod/petscmatmod.o > CC arch-linux-c-opt/obj/mat/color/utils/valid.o > CC arch-linux-c-opt/obj/mat/coarsen/impls/mis/mis.o > CC arch-linux-c-opt/obj/mat/color/interface/matcoloring.o > CC arch-linux-c-opt/obj/mat/color/interface/matcoloringregi.o > CC arch-linux-c-opt/obj/mat/coarsen/impls/misk/misk.o > CC arch-linux-c-opt/obj/mat/color/interface/ftn-custom/zmatcoloringf.o > CC arch-linux-c-opt/obj/mat/color/interface/ftn-auto/matcoloringf.o > CC arch-linux-c-opt/obj/mat/color/utils/weights.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/degr.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/numsrt.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/dsm.o > CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pdvec.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/ido.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/seq.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/setr.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/slo.o > CC arch-linux-c-opt/obj/mat/color/impls/power/power.o > CC arch-linux-c-opt/obj/mat/color/impls/minpack/color.o > CC 
arch-linux-c-opt/obj/mat/color/impls/natural/natural.o > CC arch-linux-c-opt/obj/mat/utils/bandwidth.o > CC arch-linux-c-opt/obj/mat/utils/compressedrow.o > CC arch-linux-c-opt/obj/mat/utils/convert.o > CC arch-linux-c-opt/obj/mat/utils/freespace.o > CC arch-linux-c-opt/obj/mat/coarsen/impls/hem/hem.o > CC arch-linux-c-opt/obj/mat/utils/getcolv.o > CC arch-linux-c-opt/obj/mat/utils/matio.o > CC arch-linux-c-opt/obj/mat/utils/matstashspace.o > CC arch-linux-c-opt/obj/mat/utils/axpy.o > CC arch-linux-c-opt/obj/mat/color/impls/jp/jp.o > CC arch-linux-c-opt/obj/mat/utils/pheap.o > CC arch-linux-c-opt/obj/mat/utils/gcreate.o > CC arch-linux-c-opt/obj/mat/utils/veccreatematdense.o > CC arch-linux-c-opt/obj/mat/utils/overlapsplit.o > CC arch-linux-c-opt/obj/mat/utils/zerodiag.o > CC arch-linux-c-opt/obj/mat/utils/ftn-auto/axpyf.o > CC arch-linux-c-opt/obj/mat/utils/multequal.o > CC arch-linux-c-opt/obj/mat/utils/zerorows.o > CC arch-linux-c-opt/obj/mat/utils/ftn-auto/bandwidthf.o > CC arch-linux-c-opt/obj/mat/color/impls/greedy/greedy.o > CC arch-linux-c-opt/obj/mat/utils/ftn-auto/gcreatef.o > CC arch-linux-c-opt/obj/mat/utils/ftn-auto/getcolvf.o > CC arch-linux-c-opt/obj/mat/utils/ftn-auto/multequalf.o > CC arch-linux-c-opt/obj/mat/utils/ftn-auto/zerodiagf.o > CC arch-linux-c-opt/obj/mat/order/degree.o > CC arch-linux-c-opt/obj/mat/order/fn1wd.o > CC arch-linux-c-opt/obj/mat/order/fndsep.o > CC arch-linux-c-opt/obj/mat/order/fnroot.o > CC arch-linux-c-opt/obj/mat/order/gen1wd.o > CC arch-linux-c-opt/obj/mat/order/gennd.o > CC arch-linux-c-opt/obj/mat/order/genrcm.o > CC arch-linux-c-opt/obj/mat/order/genqmd.o > CC arch-linux-c-opt/obj/mat/order/qmdqt.o > CC arch-linux-c-opt/obj/mat/order/qmdmrg.o > CC arch-linux-c-opt/obj/mat/order/qmdrch.o > CC arch-linux-c-opt/obj/mat/utils/matstash.o > CC arch-linux-c-opt/obj/mat/order/qmdupd.o > CC arch-linux-c-opt/obj/mat/order/rcm.o > CC arch-linux-c-opt/obj/mat/order/rootls.o > CC arch-linux-c-opt/obj/mat/order/sp1wd.o > CC arch-linux-c-opt/obj/mat/order/spnd.o > CC arch-linux-c-opt/obj/mat/order/spqmd.o > CC arch-linux-c-opt/obj/mat/order/sprcm.o > CC arch-linux-c-opt/obj/mat/order/wbm.o > CC arch-linux-c-opt/obj/mat/order/sregis.o > CC arch-linux-c-opt/obj/mat/order/ftn-custom/zsorderf.o > CC arch-linux-c-opt/obj/mat/order/sorder.o > CC arch-linux-c-opt/obj/mat/order/ftn-auto/spectralf.o > CC arch-linux-c-opt/obj/mat/order/spectral.o > CC arch-linux-c-opt/obj/mat/order/metisnd/metisnd.o > CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatnullf.o > CC arch-linux-c-opt/obj/mat/interface/matregis.o > CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatregf.o > CC arch-linux-c-opt/obj/mat/interface/matreg.o > CC arch-linux-c-opt/obj/mat/interface/matnull.o > CC arch-linux-c-opt/obj/mat/interface/dlregismat.o > CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matnullf.o > CC arch-linux-c-opt/obj/mat/interface/f90-custom/zmatrixf90.o > CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matproductf.o > CC arch-linux-c-opt/obj/mat/ftn-custom/zmat.o > CC arch-linux-c-opt/obj/mat/matfd/ftn-custom/zfdmatrixf.o > CC arch-linux-c-opt/obj/mat/matfd/ftn-auto/fdmatrixf.o > CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matrixf.o > CC arch-linux-c-opt/obj/mat/interface/matproduct.o > CC arch-linux-c-opt/obj/mat/impls/transpose/transm.o > CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatrixf.o > CC arch-linux-c-opt/obj/mat/impls/transpose/ftn-auto/htransmf.o > CC arch-linux-c-opt/obj/mat/impls/transpose/ftn-auto/transmf.o > CC 
arch-linux-c-opt/obj/mat/impls/transpose/htransm.o > CC arch-linux-c-opt/obj/mat/matfd/fdmatrix.o > CC arch-linux-c-opt/obj/mat/impls/normal/ftn-auto/normmf.o > CC arch-linux-c-opt/obj/mat/impls/normal/ftn-auto/normmhf.o > CC arch-linux-c-opt/obj/mat/impls/python/ftn-custom/zpythonmf.o > CC arch-linux-c-opt/obj/mat/impls/python/pythonmat.o > CC arch-linux-c-opt/obj/mat/impls/sell/seq/fdsell.o > CC arch-linux-c-opt/obj/mat/impls/sell/seq/ftn-custom/zsellf.o > CC arch-linux-c-opt/obj/mat/impls/normal/normmh.o > CC arch-linux-c-opt/obj/mat/impls/normal/normm.o > CC arch-linux-c-opt/obj/mat/impls/is/ftn-auto/matisf.o > CC arch-linux-c-opt/obj/mat/impls/shell/ftn-auto/shellf.o > CC arch-linux-c-opt/obj/mat/impls/shell/ftn-custom/zshellf.o > CC arch-linux-c-opt/obj/mat/impls/shell/shellcnv.o > CC arch-linux-c-opt/obj/mat/impls/sell/mpi/mmsell.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/aijsbaij.o > CC arch-linux-c-opt/obj/mat/impls/shell/shell.o > CC arch-linux-c-opt/obj/mat/impls/sell/seq/sell.o > CC arch-linux-c-opt/obj/mat/impls/sell/mpi/mpisell.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact10.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact3.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact11.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact12.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaij2.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact4.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact5.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact6.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact7.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/ftn-custom/zsbaijf.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sro.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact8.o > CC arch-linux-c-opt/obj/mat/impls/is/matis.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/ftn-auto/sbaijf.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/ftn-custom/zmpisbaijf.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact9.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mpiaijsbaij.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/ftn-auto/mpisbaijf.o > CC arch-linux-c-opt/obj/mat/impls/kaij/ftn-auto/kaijf.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaij.o > CC arch-linux-c-opt/obj/mat/interface/matrix.o > CC arch-linux-c-opt/obj/mat/impls/adj/mpi/ftn-custom/zmpiadjf.o > CC arch-linux-c-opt/obj/mat/impls/adj/mpi/ftn-auto/mpiadjf.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mmsbaij.o > CC arch-linux-c-opt/obj/mat/impls/diagonal/ftn-auto/diagonalf.o > CC arch-linux-c-opt/obj/mat/impls/scalapack/ftn-auto/matscalapackf.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/sbaijov.o > CC arch-linux-c-opt/obj/mat/impls/lrc/ftn-auto/lrcf.o > CC arch-linux-c-opt/obj/mat/impls/diagonal/diagonal.o > CC arch-linux-c-opt/obj/mat/impls/lrc/lrc.o > CC arch-linux-c-opt/obj/mat/impls/fft/ftn-custom/zfftf.o > CC arch-linux-c-opt/obj/mat/impls/fft/fft.o > CC arch-linux-c-opt/obj/mat/impls/dummy/matdummy.o > CC arch-linux-c-opt/obj/mat/impls/submat/ftn-auto/submatf.o > CC arch-linux-c-opt/obj/mat/impls/cdiagonal/ftn-auto/cdiagonalf.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact2.o > CC arch-linux-c-opt/obj/mat/impls/submat/submat.o > CC arch-linux-c-opt/obj/mat/impls/cdiagonal/cdiagonal.o > CC arch-linux-c-opt/obj/mat/impls/maij/ftn-auto/maijf.o > CC arch-linux-c-opt/obj/mat/impls/composite/ftn-auto/mcompositef.o > CC arch-linux-c-opt/obj/mat/impls/adj/mpi/mpiadj.o > CC 
arch-linux-c-opt/obj/mat/impls/nest/ftn-custom/zmatnestf.o > CC arch-linux-c-opt/obj/mat/impls/nest/ftn-auto/matnestf.o > CC arch-linux-c-opt/obj/mat/impls/kaij/kaij.o > CC arch-linux-c-opt/obj/mat/impls/composite/mcomposite.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijhdf5.o > CC arch-linux-c-opt/obj/mat/impls/scalapack/matscalapack.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/ij.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/inode2.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/fdaij.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/matmatmatmult.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/matptap.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/matrart.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/mattransposematmult.o > CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mpisbaij.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/symtranspose.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/ftn-custom/zaijf.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/ftn-auto/aijf.o > CC arch-linux-c-opt/obj/mat/impls/nest/matnest.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/bas/basfactor.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijsell/aijsell.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/crl/crl.o > CC arch-linux-c-opt/obj/mat/impls/maij/maij.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijfact.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijperm/aijperm.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpb_aij.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiaijpc.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/bas/spbas.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimatmatmatmult.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimattransposematmult.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mmaij.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/fdmpiaij.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mumps/ftn-auto/mumpsf.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/aijsell/mpiaijsell.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/matmatmult.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/ftn-auto/mpiaijf.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/aijperm/mpiaijperm.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/ftn-custom/zmpiaijf.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/inode.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/crl/mcrl.o > CC arch-linux-c-opt/obj/mat/impls/dense/seq/ftn-custom/zdensef.o > CC arch-linux-c-opt/obj/mat/impls/dense/seq/densehdf5.o > CC arch-linux-c-opt/obj/mat/impls/dense/seq/ftn-auto/densef.o > CC arch-linux-c-opt/obj/mat/impls/aij/seq/aij.o > CC arch-linux-c-opt/obj/mat/impls/dense/mpi/mmdense.o > CC arch-linux-c-opt/obj/mat/impls/dense/mpi/ftn-custom/zmpidensef.o > CC arch-linux-c-opt/obj/mat/impls/dense/mpi/ftn-auto/mpidensef.o > CC arch-linux-c-opt/obj/mat/impls/preallocator/ftn-auto/matpreallocatorf.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimatmatmult.o > CC arch-linux-c-opt/obj/mat/impls/preallocator/matpreallocator.o > CC arch-linux-c-opt/obj/mat/impls/mffd/mffd.o > CC arch-linux-c-opt/obj/mat/impls/mffd/mfregis.o > CC arch-linux-c-opt/obj/mat/impls/mffd/mffddef.o > CC arch-linux-c-opt/obj/mat/impls/mffd/wp.o > CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/mffddeff.o > CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-custom/zmffdf.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mumps/mumps.o > CC arch-linux-c-opt/obj/mat/impls/dense/mpi/mpidense.o > CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/wpf.o > CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/mffdf.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/aijbaij.o > CC arch-linux-c-opt/obj/mat/impls/dense/seq/dense.o > CC 
arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiptap.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact11.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiov.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact13.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact3.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact2.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact4.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact81.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat1.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat11.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact9.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat14.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact7.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baij2.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolv.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat2.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat3.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat15.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat4.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat5.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat6.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran1.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact5.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat7.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran2.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran3.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran4.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran5.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran6.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrann.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran7.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat1.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat2.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat3.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgedi.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat4.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat5.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa3.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat6.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat7.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa4.o > CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiaij.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa5.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa2.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/ftn-custom/zbaijf.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa6.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/ftn-auto/baijf.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa7.o > CC arch-linux-c-opt/obj/mat/impls/baij/mpi/ftn-auto/mpibaijf.o > CC arch-linux-c-opt/obj/mat/impls/baij/mpi/ftn-custom/zmpibaijf.o > CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpiaijbaij.o > CC arch-linux-c-opt/obj/mat/impls/scatter/mscatter.o > CC arch-linux-c-opt/obj/mat/impls/scatter/ftn-auto/mscatterf.o > CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpb_baij.o > CC arch-linux-c-opt/obj/mat/impls/localref/ftn-auto/mlocalreff.o > CC arch-linux-c-opt/obj/mat/impls/centering/ftn-auto/centeringf.o > CC arch-linux-c-opt/obj/mat/impls/baij/seq/baij.o > CC arch-linux-c-opt/obj/mat/impls/centering/centering.o > CC 
arch-linux-c-opt/obj/mat/impls/localref/mlocalref.o > CC arch-linux-c-opt/obj/mat/partition/spartition.o > CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mmbaij.o > CC arch-linux-c-opt/obj/mat/partition/ftn-auto/partitionf.o > CC arch-linux-c-opt/obj/mat/partition/ftn-custom/zpartitionf.o > CC arch-linux-c-opt/obj/dm/dt/space/interface/ftn-auto/spacef.o > CC arch-linux-c-opt/obj/mat/partition/partition.o > CC arch-linux-c-opt/obj/dm/dt/space/interface/space.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/ptrimmed/ftn-auto/spaceptrimmedf.o > CC arch-linux-c-opt/obj/mat/partition/impls/hierarchical/hierarchical.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/point/ftn-auto/spacepointf.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/ptrimmed/spaceptrimmed.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/point/spacepoint.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/tensor/ftn-auto/spacetensorf.o > CC arch-linux-c-opt/obj/mat/impls/blockmat/seq/blockmat.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/sum/ftn-auto/spacesumf.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/wxy/spacewxy.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/subspace/ftn-auto/spacesubspacef.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/poly/ftn-auto/spacepolyf.o > CC arch-linux-c-opt/obj/dm/dt/fe/interface/feceed.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/sum/spacesum.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/poly/spacepoly.o > FC arch-linux-c-opt/obj/dm/f90-mod/petscdmmod.o > CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-custom/zfef.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/tensor/spacetensor.o > CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-auto/fegeomf.o > CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-auto/fef.o > CC arch-linux-c-opt/obj/mat/impls/baij/mpi/baijov.o > CC arch-linux-c-opt/obj/dm/dt/fe/interface/fegeom.o > CC arch-linux-c-opt/obj/dm/dt/space/impls/subspace/spacesubspace.o > CC arch-linux-c-opt/obj/dm/dt/fv/interface/fvceed.o > CC arch-linux-c-opt/obj/dm/dt/fv/interface/ftn-auto/fvf.o > CC arch-linux-c-opt/obj/dm/dt/fv/interface/ftn-custom/zfvf.o > CC arch-linux-c-opt/obj/dm/dt/fe/impls/composite/fecomposite.o > CC arch-linux-c-opt/obj/dm/dt/interface/dtprob.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdsf.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdtf.o > CC arch-linux-c-opt/obj/dm/dt/fe/interface/fe.o > CC arch-linux-c-opt/obj/dm/dt/fv/interface/fv.o > CC arch-linux-c-opt/obj/dm/dt/interface/f90-custom/zdtdsf90.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdtfef.o > CC arch-linux-c-opt/obj/dm/dt/interface/f90-custom/zdtf90.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtaltvf.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtf.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtdsf.o > CC arch-linux-c-opt/obj/dm/dt/fe/impls/basic/febasic.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtprobf.o > CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtweakformf.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/interface/ftn-auto/dualspacef.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/refined/ftn-auto/dualspacerefinedf.o > CC arch-linux-c-opt/obj/dm/dt/interface/dtweakform.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/refined/dualspacerefined.o > CC arch-linux-c-opt/obj/dm/dt/interface/dtaltv.o > CC arch-linux-c-opt/obj/dm/dt/interface/dtds.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/lagrange/ftn-auto/dspacelagrangef.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/simple/ftn-auto/dspacesimplef.o > CC arch-linux-c-opt/obj/dm/label/ftn-custom/zdmlabel.o 
> CC arch-linux-c-opt/obj/dm/label/ftn-auto/dmlabelf.o > CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpibaij.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/simple/dspacesimple.o > CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/plex/dmlabelephplex.o > CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/plex/ftn-auto/dmlabelephplexf.o > CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/ftn-auto/dmlabelephf.o > CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/dmlabeleph.o > CC arch-linux-c-opt/obj/dm/interface/dmceed.o > CC arch-linux-c-opt/obj/dm/interface/dlregisdmdm.o > CC arch-linux-c-opt/obj/dm/interface/dmgenerate.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/interface/dualspace.o > CC arch-linux-c-opt/obj/dm/interface/dmget.o > CC arch-linux-c-opt/obj/dm/interface/dmglvis.o > CC arch-linux-c-opt/obj/dm/interface/dmcoordinates.o > CC arch-linux-c-opt/obj/dm/dt/interface/dt.o > CC arch-linux-c-opt/obj/dm/interface/ftn-custom/zdmgetf.o > CC arch-linux-c-opt/obj/dm/interface/dmregall.o > CC arch-linux-c-opt/obj/dm/interface/dmperiodicity.o > CC arch-linux-c-opt/obj/dm/interface/ftn-custom/zdmf.o > CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmcoordinatesf.o > CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmgetf.o > CC arch-linux-c-opt/obj/dm/interface/dmi.o > CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmperiodicityf.o > CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmf.o > CC arch-linux-c-opt/obj/dm/field/interface/dlregisdmfield.o > CC arch-linux-c-opt/obj/dm/field/interface/dmfieldregi.o > CC arch-linux-c-opt/obj/dm/field/interface/ftn-auto/dmfieldf.o > CC arch-linux-c-opt/obj/dm/field/interface/dmfield.o > CC arch-linux-c-opt/obj/dm/field/impls/shell/dmfieldshell.o > CC arch-linux-c-opt/obj/dm/impls/swarm/data_ex.o > CC arch-linux-c-opt/obj/dm/impls/swarm/data_bucket.o > CC arch-linux-c-opt/obj/dm/field/impls/da/dmfieldda.o > CC arch-linux-c-opt/obj/dm/label/dmlabel.o > CC arch-linux-c-opt/obj/dm/impls/swarm/swarm_migrate.o > CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_da.o > CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_sort.o > CC arch-linux-c-opt/obj/dm/impls/swarm/f90-custom/zswarmf90.o > CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-custom/zswarm.o > CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_plex.o > CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_view.o > CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarm_migratef.o > CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarmpicf.o > CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarmf.o > CC arch-linux-c-opt/obj/dm/impls/swarm/swarm.o > CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic.o > CC arch-linux-c-opt/obj/dm/impls/forest/ftn-auto/forestf.o > CC arch-linux-c-opt/obj/dm/impls/shell/ftn-auto/dmshellf.o > CC arch-linux-c-opt/obj/dm/impls/shell/ftn-custom/zdmshellf.o > CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/lagrange/dspacelagrange.o > CC arch-linux-c-opt/obj/dm/impls/shell/dmshell.o > CC arch-linux-c-opt/obj/dm/field/impls/ds/dmfieldds.o > CC arch-linux-c-opt/obj/dm/impls/forest/forest.o > CC arch-linux-c-opt/obj/dm/impls/stag/stagintern.o > CC arch-linux-c-opt/obj/dm/impls/stag/stag1d.o > CC arch-linux-c-opt/obj/dm/impls/stag/stagda.o > CC arch-linux-c-opt/obj/dm/impls/stag/stag.o > CC arch-linux-c-opt/obj/dm/interface/dm.o > CC arch-linux-c-opt/obj/dm/impls/stag/stagstencil.o > CC arch-linux-c-opt/obj/dm/impls/stag/stagmulti.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexcgns.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexadapt.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexceed.o > CC 
arch-linux-c-opt/obj/dm/impls/stag/stagutils.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexcoarsen.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexcheckinterface.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexegads.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexegadslite.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexextrude.o > CC arch-linux-c-opt/obj/dm/impls/stag/stag2d.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexgenerate.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexfvm.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexfluent.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexexodusii.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexdistribute.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexglvis.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexhdf5xdmf.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexhpddm.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexindices.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexmed.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexmetric.o > CC arch-linux-c-opt/obj/dm/impls/stag/stag3d.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexhdf5.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexgeometry.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexcreate.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexnatural.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexinterpolate.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexpoint.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexply.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexrefine.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexorient.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexgmsh.o > CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfpack.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexreorder.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexproject.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexpreallocate.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexsection.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexpartition.o > CC arch-linux-c-opt/obj/dm/impls/plex/pointqueue.o > CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexf90.o > CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexfemf90.o > CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexgeometryf90.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexvtk.o > CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexsectionf90.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexsfc.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/interface/ftn-auto/plextransformf.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/extrude/ftn-auto/plextrextrudef.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/1d/plexref1d.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/regular/plexrefregular.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/regular/ftn-auto/plexrefregularf.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexfem.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/bl/plexrefbl.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexvtu.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/extrude/plextrextrude.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/alfeld/plexrefalfeld.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/tobox/plexreftobox.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcgnsf.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/filter/plextrfilter.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcheckinterfacef.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcreatef.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexegadsf.o > CC 
arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/sbr/plexrefsbr.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexexodusiif.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexdistributef.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexfemf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexfvmf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexgeometryf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexgmshf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexindicesf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexinterpolatef.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexnaturalf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexorientf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexpartitionf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexmetricf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexpointf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexprojectf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexrefinef.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexreorderf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexsfcf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plextreef.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexsubmeshf.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexcreate.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexdistribute.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexexodusii.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexextrude.o > CC arch-linux-c-opt/obj/dm/impls/plex/transform/interface/plextransform.o > CC arch-linux-c-opt/obj/dm/impls/plex/plex.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexfluent.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexgmsh.o > CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexsubmesh.o > CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkcreatef.o > CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkmonitorf.o > CC arch-linux-c-opt/obj/dm/impls/network/networkmonitor.o > CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkf.o > CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkviewf.o > CC arch-linux-c-opt/obj/dm/impls/patch/ftn-auto/patchcreatef.o > CC arch-linux-c-opt/obj/dm/impls/network/networkview.o > CC arch-linux-c-opt/obj/dm/impls/patch/patchcreate.o > CC arch-linux-c-opt/obj/dm/impls/network/networkcreate.o > CC arch-linux-c-opt/obj/dm/impls/composite/f90-custom/zfddaf90.o > CC arch-linux-c-opt/obj/dm/impls/composite/ftn-auto/packf.o > CC arch-linux-c-opt/obj/dm/impls/composite/ftn-custom/zfddaf.o > CC arch-linux-c-opt/obj/dm/impls/patch/patch.o > CC arch-linux-c-opt/obj/dm/impls/composite/packm.o > CC arch-linux-c-opt/obj/dm/impls/product/product.o > CC arch-linux-c-opt/obj/dm/impls/redundant/ftn-auto/dmredundantf.o > CC arch-linux-c-opt/obj/dm/impls/product/productutils.o > CC arch-linux-c-opt/obj/dm/impls/sliced/sliced.o > CC arch-linux-c-opt/obj/dm/impls/redundant/dmredundant.o > CC arch-linux-c-opt/obj/dm/impls/plex/plexsubmesh.o > CC arch-linux-c-opt/obj/dm/impls/da/da1.o > CC arch-linux-c-opt/obj/dm/impls/da/dacorn.o > CC arch-linux-c-opt/obj/dm/impls/composite/pack.o > CC arch-linux-c-opt/obj/dm/impls/da/da.o > CC arch-linux-c-opt/obj/dm/impls/da/dadestroy.o > CC arch-linux-c-opt/obj/dm/impls/da/dadist.o > CC arch-linux-c-opt/obj/dm/impls/da/dacreate.o > CC arch-linux-c-opt/obj/dm/impls/da/dadd.o > CC arch-linux-c-opt/obj/dm/impls/plex/plextree.o > 
CC arch-linux-c-opt/obj/dm/impls/da/da2.o > CC arch-linux-c-opt/obj/dm/impls/da/dageometry.o > CC arch-linux-c-opt/obj/dm/impls/da/daghost.o > CC arch-linux-c-opt/obj/dm/impls/da/dagtona.o > CC arch-linux-c-opt/obj/dm/impls/da/dagtol.o > CC arch-linux-c-opt/obj/dm/impls/da/daindex.o > CC arch-linux-c-opt/obj/dm/impls/da/dagetarray.o > CC arch-linux-c-opt/obj/dm/impls/da/dagetelem.o > CC arch-linux-c-opt/obj/dm/impls/da/daltol.o > CC arch-linux-c-opt/obj/dm/impls/da/dapf.o > CC arch-linux-c-opt/obj/dm/impls/da/dapreallocate.o > CC arch-linux-c-opt/obj/dm/impls/da/dareg.o > CC arch-linux-c-opt/obj/dm/impls/da/dascatter.o > CC arch-linux-c-opt/obj/dm/impls/da/dalocal.o > CC arch-linux-c-opt/obj/dm/impls/da/daview.o > CC arch-linux-c-opt/obj/dm/impls/da/dasub.o > CC arch-linux-c-opt/obj/dm/impls/da/f90-custom/zda1f90.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zda1f.o > CC arch-linux-c-opt/obj/dm/impls/da/gr1.o > CC arch-linux-c-opt/obj/dm/impls/network/network.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zda2f.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zda3f.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdacornf.o > CC arch-linux-c-opt/obj/dm/impls/da/grglvis.o > CC arch-linux-c-opt/obj/dm/impls/da/da3.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdagetscatterf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaindexf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdasubf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaghostf.o > CC arch-linux-c-opt/obj/dm/impls/da/gr2.o > CC arch-linux-c-opt/obj/dm/impls/da/dainterp.o > CC arch-linux-c-opt/obj/dm/impls/da/grvtk.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dacornf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-custom/zdaviewf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dacreatef.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/daddf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dageometryf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dadistf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagetarrayf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/daf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagetelemf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagtolf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/daindexf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dagtonaf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dalocalf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dainterpf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dapreallocatef.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/dasubf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/fddaf.o > CC arch-linux-c-opt/obj/dm/impls/da/ftn-auto/gr1f.o > CC arch-linux-c-opt/obj/dm/partitioner/interface/partitionerreg.o > CC arch-linux-c-opt/obj/dm/partitioner/interface/ftn-custom/zpartitioner.o > CC arch-linux-c-opt/obj/dm/partitioner/interface/ftn-auto/partitionerf.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/chaco/partchaco.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/gather/partgather.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/shell/ftn-auto/partshellf.o > CC arch-linux-c-opt/obj/dm/partitioner/interface/partitioner.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/shell/partshell.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/ptscotch/partptscotch.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/parmetis/partparmetis.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/matpart/partmatpart.o > CC arch-linux-c-opt/obj/ksp/pc/interface/pcregis.o > 
CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-custom/zpcsetf.o > CC arch-linux-c-opt/obj/ksp/pc/interface/pcset.o > CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-auto/pcsetf.o > CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-custom/zpreconf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mat/ftn-auto/pcmatf.o > CC arch-linux-c-opt/obj/dm/partitioner/impls/simple/partsimple.o > CC arch-linux-c-opt/obj/ksp/pc/interface/ftn-auto/preconf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mat/pcmat.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/fmg.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-custom/zmgf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-custom/zmgfuncf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/smg.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/mgadapt.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/mgfunc.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-auto/mgf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/ftn-auto/mgfuncf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/wb/ftn-auto/wbf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/gdsw.o > CC arch-linux-c-opt/obj/ksp/pc/interface/precon.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bjacobi/ftn-auto/bjacobif.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bjacobi/ftn-custom/zbjacobif.o > CC arch-linux-c-opt/obj/ksp/pc/impls/ksp/ftn-auto/pckspf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/none/none.o > CC arch-linux-c-opt/obj/ksp/pc/impls/ksp/pcksp.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gasm/ftn-auto/gasmf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gasm/ftn-custom/zgasmf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/python/pythonpc.o > CC arch-linux-c-opt/obj/ksp/pc/impls/python/ftn-custom/zpythonpcf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/sor/ftn-auto/sorf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/hmg/ftn-auto/hmgf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/kaczmarz/kaczmarz.o > CC arch-linux-c-opt/obj/ksp/pc/impls/sor/sor.o > CC arch-linux-c-opt/obj/ksp/pc/impls/is/ftn-auto/pcisf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/hmg/hmg.o > CC arch-linux-c-opt/obj/dm/impls/da/fdda.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mg/mg.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bjacobi/bjacobi.o > CC arch-linux-c-opt/obj/ksp/pc/impls/is/pcis.o > CC arch-linux-c-opt/obj/ksp/pc/impls/wb/wb.o > CC arch-linux-c-opt/obj/ksp/pc/impls/is/nn/nn.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/ftn-auto/aggf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/ftn-custom/zgamgf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/ftn-auto/gamgf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/util.o > CC arch-linux-c-opt/obj/ksp/pc/impls/shell/ftn-auto/shellpcf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/redistribute/ftn-auto/redistributef.o > CC arch-linux-c-opt/obj/ksp/pc/impls/shell/ftn-custom/zshellpcf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/geo.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gasm/gasm.o > CC arch-linux-c-opt/obj/ksp/pc/impls/shell/shellpc.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/agg.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/classical.o > CC arch-linux-c-opt/obj/ksp/pc/impls/deflation/ftn-auto/deflationf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/bitmask.o > CC arch-linux-c-opt/obj/ksp/pc/impls/redistribute/redistribute.o > CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/tfs.o > CC arch-linux-c-opt/obj/ksp/pc/impls/deflation/deflation.o > CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/comm.o > CC arch-linux-c-opt/obj/ksp/pc/impls/gamg/gamg.o > CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/ivec.o > CC arch-linux-c-opt/obj/ksp/pc/impls/deflation/deflationspace.o > CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/xxt.o > CC 
arch-linux-c-opt/obj/ksp/pc/impls/factor/factimpl.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/lu/lu.o > CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/gs.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/cholesky/ftn-auto/choleskyf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/qr/qr.o > CC arch-linux-c-opt/obj/ksp/pc/impls/tfs/xyt.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/factor.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/ftn-custom/zluf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/ftn-auto/factorf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/cholesky/cholesky.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/icc/icc.o > CC arch-linux-c-opt/obj/ksp/pc/impls/factor/ilu/ilu.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/ftn-custom/zbddcf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/ftn-auto/bddcf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcnullspace.o > CC arch-linux-c-opt/obj/ksp/pc/impls/fieldsplit/ftn-auto/fieldsplitf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/fieldsplit/ftn-custom/zfieldsplitf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcscalingbasic.o > CC arch-linux-c-opt/obj/ksp/pc/impls/composite/ftn-custom/zcompositef.o > CC arch-linux-c-opt/obj/ksp/pc/impls/composite/ftn-auto/compositef.o > CC arch-linux-c-opt/obj/ksp/pc/impls/composite/composite.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcfetidp.o > CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/telescope_coarsedm.o > CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/ftn-auto/telescopef.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcgraph.o > CC arch-linux-c-opt/obj/ksp/pc/impls/redundant/ftn-auto/redundantf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/telescope.o > CC arch-linux-c-opt/obj/ksp/pc/impls/redundant/redundant.o > CC arch-linux-c-opt/obj/ksp/pc/impls/lsc/lsc.o > CC arch-linux-c-opt/obj/ksp/pc/impls/svd/svd.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddc.o > CC arch-linux-c-opt/obj/ksp/pc/impls/telescope/telescope_dmda.o > CC arch-linux-c-opt/obj/ksp/pc/impls/lmvm/lmvmpc.o > CC arch-linux-c-opt/obj/ksp/pc/impls/lmvm/ftn-auto/lmvmpcf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/asm/ftn-auto/asmf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/jacobi/ftn-auto/jacobif.o > CC arch-linux-c-opt/obj/ksp/pc/impls/asm/ftn-custom/zasmf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/mpi/pcmpi.o > CC arch-linux-c-opt/obj/ksp/pc/impls/jacobi/jacobi.o > CC arch-linux-c-opt/obj/ksp/pc/impls/galerkin/ftn-auto/galerkinf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/cp/cp.o > CC arch-linux-c-opt/obj/ksp/pc/impls/galerkin/galerkin.o > CC arch-linux-c-opt/obj/ksp/pc/impls/eisens/ftn-auto/eisenf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/eisens/eisen.o > CC arch-linux-c-opt/obj/ksp/pc/impls/fieldsplit/fieldsplit.o > CC arch-linux-c-opt/obj/ksp/pc/impls/vpbjacobi/vpbjacobi.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcschurs.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/dlregisksp.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/dmksp.o > CC arch-linux-c-opt/obj/ksp/pc/impls/pbjacobi/pbjacobi.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/iguess.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/eige.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/itcreate.o > CC arch-linux-c-opt/obj/ksp/pc/impls/asm/asm.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/itregis.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/itres.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/itcl.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/xmon.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zdmkspf.o > CC 
arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/ziguess.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zitclf.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zitcreatef.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zxonf.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-custom/zitfuncf.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/f90-custom/zitfuncf90.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/eigef.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itclf.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/iguessf.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itcreatef.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/iterativ.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/iterativf.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itresf.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/ftn-auto/itfuncf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/kspmatregi.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/schurm/ftn-auto/schurmf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/ftn-auto/symbadbrdnf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/patch/pcpatch.o > CC arch-linux-c-opt/obj/ksp/ksp/interface/itfunc.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/lmvmimpl.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/lmvmutils.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/dmproject.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/symbadbrdn.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/ftn-auto/symbrdnf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/dfp/ftn-auto/dfpf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/schurm/schurm.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/diagbrdn/ftn-auto/diagbrdnf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/ftn-auto/badbrdnf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/dfp/dfp.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/ftn-auto/brdnf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/ftn-auto/lmvmutilsf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/symbrdn/symbrdn.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/brdn.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/brdn/badbrdn.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/bfgs/ftn-auto/bfgsf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/sr1/ftn-auto/sr1f.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/diagbrdn/diagbrdn.o > CC arch-linux-c-opt/obj/ksp/ksp/guess/impls/fischer/ftn-auto/fischerf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/ftn-auto/dmprojectf.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/bfgs/bfgs.o > CC arch-linux-c-opt/obj/ksp/ksp/utils/lmvm/sr1/sr1.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/borthog.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmpre.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cgs/cgs.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/borthog2.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/lcd/lcd.o > CC arch-linux-c-opt/obj/ksp/ksp/guess/impls/fischer/fischer.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmres2.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmreig.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/ftn-auto/gmpref.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/ftn-custom/zgmres2f.o > CC arch-linux-c-opt/obj/ksp/ksp/guess/impls/pod/pod.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/ftn-auto/gmresf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/ftn-auto/modpcff.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/ftn-custom/zmodpcff.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/modpcf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/lgmres/lgmres.o > CC 
arch-linux-c-opt/obj/ksp/ksp/impls/gmres/gmres.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/pgmres/pgmres.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/pipefgmres/ftn-auto/pipefgmresf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/fgmres/fgmres.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmresleja.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/tsirm/tsirm.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmresdeflation.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/lsqr/ftn-auto/lsqrf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/pipefgmres/pipefgmres.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/python/pythonksp.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/python/ftn-custom/zpythonkspf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/lsqr/lsqr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgsl/ftn-auto/bcgslf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmresorthog.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bicg/bicg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/dgmres/dgmres.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/minres/ftn-auto/minresf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgtype.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/gltr/ftn-auto/gltrf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgeig.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgls.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gmres/agmres/agmres.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgsl/bcgsl.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipecg/pipecg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/ftn-auto/cgtypef.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/stcg/stcg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/minres/minres.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipecgrr/pipecgrr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cgne/cgne.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/cg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/groppcg/groppcg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/gltr/gltr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/ftn-auto/fcgf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/pipefcg/ftn-auto/pipefcgf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipeprcg/pipeprcg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/nash/nash.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipecg2/pipecg2.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/rich/ftn-auto/richscalef.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/rich/richscale.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cg/pipelcg/pipelcg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/qcg/ftn-auto/qcgf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/fcg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/fcg/pipefcg/pipefcg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cheby/betas.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/tfqmr/tfqmr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/rich/rich.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cheby/ftn-auto/chebyf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/qcg/qcg.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/bcgs.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/qmrcgs/qmrcgs.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/fbcgs/fbcgs.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/fetidp/ftn-auto/fetidpf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/symmlq/symmlq.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/pipegcr/ftn-auto/pipegcrf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/ftn-auto/gcrf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/pipebcgs/pipebcgs.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/bcgs/fbcgsr/fbcgsr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/gcr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/preonly/preonly.o > CC 
arch-linux-c-opt/obj/ksp/ksp/impls/cr/pipecr/pipecr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cheby/cheby.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/cr/cr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/tcqmr/tcqmr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/gcr/pipegcr/pipegcr.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/ibcgs/ibcgs.o > CC arch-linux-c-opt/obj/snes/utils/dmlocalsnes.o > CC arch-linux-c-opt/obj/snes/utils/ftn-custom/zdmdasnesf.o > CC arch-linux-c-opt/obj/snes/utils/convest.o > CC arch-linux-c-opt/obj/snes/utils/ftn-custom/zdmlocalsnesf.o > CC arch-linux-c-opt/obj/snes/utils/dmsnes.o > CC arch-linux-c-opt/obj/snes/utils/dmdasnes.o > CC arch-linux-c-opt/obj/snes/utils/ftn-custom/zdmsnesf.o > CC arch-linux-c-opt/obj/ksp/ksp/impls/fetidp/fetidp.o > CC arch-linux-c-opt/obj/snes/utils/ftn-auto/convestf.o > CC arch-linux-c-opt/obj/snes/utils/ftn-auto/dmadaptf.o > CC arch-linux-c-opt/obj/snes/utils/ftn-auto/dmplexsnesf.o > CC arch-linux-c-opt/obj/snes/linesearch/interface/linesearchregi.o > CC arch-linux-c-opt/obj/snes/linesearch/interface/ftn-custom/zlinesearchf.o > CC arch-linux-c-opt/obj/snes/linesearch/interface/ftn-auto/linesearchf.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/bt/ftn-auto/linesearchbtf.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/shell/ftn-custom/zlinesearchshellf.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/shell/linesearchshell.o > CC arch-linux-c-opt/obj/snes/utils/dmadapt.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/basic/linesearchbasic.o > CC arch-linux-c-opt/obj/snes/linesearch/interface/linesearch.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/cp/linesearchcp.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/bt/linesearchbt.o > CC arch-linux-c-opt/obj/snes/interface/dlregissnes.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/nleqerr/linesearchnleqerr.o > CC arch-linux-c-opt/obj/snes/linesearch/impls/l2/linesearchl2.o > CC arch-linux-c-opt/obj/snes/interface/snesj2.o > CC arch-linux-c-opt/obj/snes/interface/snesj.o > CC arch-linux-c-opt/obj/snes/interface/snesregi.o > CC arch-linux-c-opt/obj/snes/interface/snespc.o > CC arch-linux-c-opt/obj/snes/interface/snesob.o > CC arch-linux-c-opt/obj/snes/interface/noise/snesdnest.o > CC arch-linux-c-opt/obj/snes/interface/f90-custom/zsnesf90.o > CC arch-linux-c-opt/obj/snes/interface/ftn-auto/snespcf.o > CC arch-linux-c-opt/obj/snes/interface/ftn-auto/snesf.o > CC arch-linux-c-opt/obj/snes/interface/noise/snesmfj2.o > CC arch-linux-c-opt/obj/snes/interface/noise/snesnoise.o > CC arch-linux-c-opt/obj/snes/interface/snesut.o > CC arch-linux-c-opt/obj/snes/impls/qn/ftn-auto/qnf.o > CC arch-linux-c-opt/obj/snes/utils/dmplexsnes.o > CC arch-linux-c-opt/obj/snes/interface/ftn-custom/zsnesf.o > CC arch-linux-c-opt/obj/snes/impls/fas/ftn-auto/fasf.o > CC arch-linux-c-opt/obj/snes/impls/fas/fasgalerkin.o > CC arch-linux-c-opt/obj/snes/impls/fas/ftn-auto/fasgalerkinf.o > CC arch-linux-c-opt/obj/ksp/pc/impls/bddc/bddcprivate.o > CC arch-linux-c-opt/obj/snes/impls/fas/ftn-auto/fasfuncf.o > CC arch-linux-c-opt/obj/snes/impls/qn/qn.o > CC arch-linux-c-opt/obj/snes/impls/ntrdc/ftn-auto/ntrdcf.o > CC arch-linux-c-opt/obj/snes/impls/shell/snesshell.o > CC arch-linux-c-opt/obj/snes/impls/shell/ftn-custom/zsnesshellf.o > CC arch-linux-c-opt/obj/snes/impls/shell/ftn-auto/snesshellf.o > CC arch-linux-c-opt/obj/snes/impls/fas/fasfunc.o > CC arch-linux-c-opt/obj/snes/impls/richardson/snesrichardson.o > CC arch-linux-c-opt/obj/snes/impls/composite/ftn-auto/snescompositef.o > CC 
arch-linux-c-opt/obj/snes/impls/gs/ftn-auto/snesgsf.o > CC arch-linux-c-opt/obj/snes/impls/ntrdc/ntrdc.o > CC arch-linux-c-opt/obj/snes/impls/gs/gssecant.o > CC arch-linux-c-opt/obj/snes/impls/gs/snesgs.o > CC arch-linux-c-opt/obj/snes/impls/tr/ftn-auto/trf.o > CC arch-linux-c-opt/obj/snes/impls/fas/fas.o > CC arch-linux-c-opt/obj/snes/impls/vi/ss/ftn-auto/vissf.o > CC arch-linux-c-opt/obj/snes/impls/vi/ftn-auto/vif.o > CC arch-linux-c-opt/obj/snes/impls/patch/snespatch.o > CC arch-linux-c-opt/obj/snes/impls/vi/rs/ftn-auto/virsf.o > CC arch-linux-c-opt/obj/snes/impls/multiblock/ftn-auto/multiblockf.o > CC arch-linux-c-opt/obj/snes/impls/ksponly/ksponly.o > CC arch-linux-c-opt/obj/snes/impls/vi/ss/viss.o > CC arch-linux-c-opt/obj/snes/impls/vi/vi.o > CC arch-linux-c-opt/obj/snes/impls/tr/tr.o > CC arch-linux-c-opt/obj/snes/impls/composite/snescomposite.o > CC arch-linux-c-opt/obj/snes/impls/nasm/aspin.o > CC arch-linux-c-opt/obj/snes/impls/vi/rs/virs.o > CC arch-linux-c-opt/obj/snes/impls/nasm/ftn-auto/nasmf.o > CC arch-linux-c-opt/obj/snes/impls/ngmres/ftn-auto/snesngmresf.o > CC arch-linux-c-opt/obj/snes/impls/multiblock/multiblock.o > CC arch-linux-c-opt/obj/snes/impls/ngmres/anderson.o > CC arch-linux-c-opt/obj/snes/impls/python/ftn-custom/zpythonsf.o > CC arch-linux-c-opt/obj/snes/impls/python/pythonsnes.o > CC arch-linux-c-opt/obj/snes/impls/ngmres/ngmresfunc.o > CC arch-linux-c-opt/obj/snes/interface/snes.o > CC arch-linux-c-opt/obj/snes/impls/ncg/ftn-auto/snesncgf.o > CC arch-linux-c-opt/obj/snes/impls/ngmres/snesngmres.o > CC arch-linux-c-opt/obj/snes/impls/ls/ls.o > CC arch-linux-c-opt/obj/snes/mf/ftn-auto/snesmfjf.o > CC arch-linux-c-opt/obj/snes/mf/snesmfj.o > CC arch-linux-c-opt/obj/snes/impls/ncg/snesncg.o > CC arch-linux-c-opt/obj/snes/impls/nasm/nasm.o > CC arch-linux-c-opt/obj/snes/impls/ms/ms.o > CC arch-linux-c-opt/obj/ts/utils/dmnetworkts.o > CC arch-linux-c-opt/obj/ts/utils/dmplexlandau/ftn-custom/zlandaucreate.o > CC arch-linux-c-opt/obj/ts/utils/dmdats.o > CC arch-linux-c-opt/obj/ts/utils/dmlocalts.o > CC arch-linux-c-opt/obj/ts/utils/dmplexlandau/ftn-auto/plexlandf.o > CC arch-linux-c-opt/obj/ts/event/ftn-auto/tseventf.o > CC arch-linux-c-opt/obj/ts/utils/ftn-auto/dmplextsf.o > CC arch-linux-c-opt/obj/ts/utils/dmplexts.o > CC arch-linux-c-opt/obj/ts/utils/tsconvest.o > CC arch-linux-c-opt/obj/ts/utils/dmts.o > CC arch-linux-c-opt/obj/ts/trajectory/interface/ftn-custom/ztrajf.o > CC arch-linux-c-opt/obj/ts/trajectory/interface/ftn-auto/trajf.o > CC arch-linux-c-opt/obj/ts/trajectory/utils/reconstruct.o > CC arch-linux-c-opt/obj/ts/trajectory/impls/singlefile/singlefile.o > CC arch-linux-c-opt/obj/ts/trajectory/impls/visualization/trajvisualization.o > CC arch-linux-c-opt/obj/ts/trajectory/impls/basic/trajbasic.o > CC arch-linux-c-opt/obj/ts/adapt/interface/ftn-custom/ztsadaptf.o > CC arch-linux-c-opt/obj/ts/event/tsevent.o > CC arch-linux-c-opt/obj/ts/adapt/interface/ftn-auto/tsadaptf.o > CC arch-linux-c-opt/obj/ts/trajectory/interface/traj.o > CC arch-linux-c-opt/obj/ts/adapt/impls/history/adapthist.o > CC arch-linux-c-opt/obj/ts/adapt/impls/history/ftn-auto/adapthistf.o > CC arch-linux-c-opt/obj/ts/adapt/impls/none/adaptnone.o > CC arch-linux-c-opt/obj/ts/adapt/impls/glee/adaptglee.o > CC arch-linux-c-opt/obj/ts/adapt/impls/basic/adaptbasic.o > CC arch-linux-c-opt/obj/ts/adapt/impls/cfl/adaptcfl.o > CC arch-linux-c-opt/obj/ts/adapt/impls/dsp/ftn-custom/zadaptdspf.o > CC arch-linux-c-opt/obj/ts/adapt/interface/tsadapt.o > CC 
arch-linux-c-opt/obj/ts/adapt/impls/dsp/ftn-auto/adaptdspf.o > CC arch-linux-c-opt/obj/ts/interface/tscreate.o > CC arch-linux-c-opt/obj/ts/adapt/impls/dsp/adaptdsp.o > CC arch-linux-c-opt/obj/ts/interface/dlregists.o > CC arch-linux-c-opt/obj/ts/trajectory/impls/memory/trajmemory.o > CC arch-linux-c-opt/obj/ts/interface/tsreg.o > CC arch-linux-c-opt/obj/ts/interface/tseig.o > CC arch-linux-c-opt/obj/ts/interface/tshistory.o > CC arch-linux-c-opt/obj/ts/interface/tsregall.o > CC arch-linux-c-opt/obj/ts/interface/ftn-custom/ztscreatef.o > CC arch-linux-c-opt/obj/ts/interface/tsrhssplit.o > CC arch-linux-c-opt/obj/ts/interface/sensitivity/ftn-auto/tssenf.o > CC arch-linux-c-opt/obj/ts/interface/ftn-custom/ztsregf.o > CC arch-linux-c-opt/obj/ts/impls/explicit/rk/ftn-custom/zrkf.o > CC arch-linux-c-opt/obj/ts/interface/ftn-custom/ztsf.o > CC arch-linux-c-opt/obj/ts/interface/ftn-auto/tsf.o > CC arch-linux-c-opt/obj/ts/impls/explicit/rk/ftn-auto/rkf.o > CC arch-linux-c-opt/obj/ts/impls/explicit/ssp/ftn-custom/zsspf.o > CC arch-linux-c-opt/obj/ts/impls/explicit/ssp/ftn-auto/sspf.o > CC arch-linux-c-opt/obj/ts/impls/explicit/euler/euler.o > CC arch-linux-c-opt/obj/ts/interface/sensitivity/tssen.o > CC arch-linux-c-opt/obj/ts/interface/tsmon.o > CC arch-linux-c-opt/obj/ts/impls/rosw/ftn-custom/zroswf.o > CC arch-linux-c-opt/obj/ts/impls/explicit/rk/mrk.o > CC arch-linux-c-opt/obj/ts/impls/explicit/ssp/ssp.o > CC arch-linux-c-opt/obj/ts/impls/arkimex/ftn-auto/arkimexf.o > CC arch-linux-c-opt/obj/ts/impls/arkimex/ftn-custom/zarkimexf.o > CC arch-linux-c-opt/obj/ts/impls/pseudo/ftn-auto/posindepf.o > CC arch-linux-c-opt/obj/ts/impls/pseudo/posindep.o > CC arch-linux-c-opt/obj/ts/impls/python/pythonts.o > CC arch-linux-c-opt/obj/ts/impls/symplectic/basicsymplectic/basicsymplectic.o > CC arch-linux-c-opt/obj/ts/impls/explicit/rk/rk.o > CC arch-linux-c-opt/obj/ts/impls/python/ftn-custom/zpythontf.o > CC arch-linux-c-opt/obj/ts/impls/eimex/eimex.o > CC arch-linux-c-opt/obj/ts/impls/implicit/theta/ftn-auto/thetaf.o > CC arch-linux-c-opt/obj/ts/impls/mimex/mimex.o > CC arch-linux-c-opt/obj/ts/impls/rosw/rosw.o > CC arch-linux-c-opt/obj/ts/impls/glee/glee.o > CC arch-linux-c-opt/obj/ts/interface/ts.o > CC arch-linux-c-opt/obj/ts/impls/implicit/glle/glleadapt.o > CC arch-linux-c-opt/obj/ts/impls/arkimex/arkimex.o > CC arch-linux-c-opt/obj/ts/impls/implicit/irk/irk.o > CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/ftn-auto/alpha1f.o > CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/alpha1.o > CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/ftn-auto/alpha2f.o > CC arch-linux-c-opt/obj/ts/impls/implicit/discgrad/ftn-auto/tsdiscgradf.o > CC arch-linux-c-opt/obj/ts/impls/bdf/ftn-auto/bdff.o > CC arch-linux-c-opt/obj/ts/impls/implicit/alpha/alpha2.o > CC arch-linux-c-opt/obj/ts/characteristic/interface/mocregis.o > CC arch-linux-c-opt/obj/ts/characteristic/interface/ftn-auto/characteristicf.o > CC arch-linux-c-opt/obj/ts/impls/implicit/discgrad/tsdiscgrad.o > CC arch-linux-c-opt/obj/ts/characteristic/interface/slregis.o > CC arch-linux-c-opt/obj/ts/impls/multirate/mprk.o > CC arch-linux-c-opt/obj/ts/impls/implicit/theta/theta.o > CC arch-linux-c-opt/obj/ts/characteristic/impls/da/slda.o > CC arch-linux-c-opt/obj/ts/impls/bdf/bdf.o > CC arch-linux-c-opt/obj/tao/bound/impls/blmvm/ftn-auto/blmvmf.o > CC arch-linux-c-opt/obj/tao/bound/impls/bqnls/bqnls.o > CC arch-linux-c-opt/obj/tao/bound/impls/blmvm/blmvm.o > CC arch-linux-c-opt/obj/tao/bound/utils/isutil.o > CC 
arch-linux-c-opt/obj/ts/utils/dmplexlandau/plexland.o > CC arch-linux-c-opt/obj/tao/bound/impls/tron/tron.o > CC arch-linux-c-opt/obj/ts/characteristic/interface/characteristic.o > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bnls.o > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bntl.o > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bntr.o > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnkls.o > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnktl.o > CC arch-linux-c-opt/obj/tao/pde_constrained/impls/lcl/lcl.o > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnk.o > CC arch-linux-c-opt/obj/tao/bound/impls/bncg/bncg.o > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnktr.o > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/ftn-auto/bqnkf.o > CC arch-linux-c-opt/obj/tao/shell/ftn-auto/taoshellf.o > CC arch-linux-c-opt/obj/tao/shell/taoshell.o > CC arch-linux-c-opt/obj/tao/matrix/submatfree.o > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bnk.o > CC arch-linux-c-opt/obj/tao/matrix/adamat.o > CC arch-linux-c-opt/obj/tao/quadratic/impls/gpcg/gpcg.o > CC arch-linux-c-opt/obj/tao/constrained/impls/almm/ftn-auto/almmutilsf.o > CC arch-linux-c-opt/obj/tao/constrained/impls/almm/almmutils.o > CC arch-linux-c-opt/obj/tao/quadratic/impls/bqpip/bqpip.o > CC arch-linux-c-opt/obj/tao/constrained/impls/admm/ftn-auto/admmf.o > CC arch-linux-c-opt/obj/ts/impls/implicit/glle/glle.o > CC arch-linux-c-opt/obj/tao/constrained/impls/admm/ftn-custom/zadmmf.o > CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssls.o > CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssfls.o > CC arch-linux-c-opt/obj/tao/linesearch/interface/dlregis_taolinesearch.o > CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssils.o > CC arch-linux-c-opt/obj/tao/constrained/impls/almm/almm.o > CC arch-linux-c-opt/obj/tao/complementarity/impls/asls/asfls.o > CC arch-linux-c-opt/obj/tao/complementarity/impls/asls/asils.o > CC arch-linux-c-opt/obj/tao/linesearch/interface/ftn-auto/taolinesearchf.o > CC arch-linux-c-opt/obj/tao/linesearch/interface/ftn-custom/ztaolinesearchf.o > CC arch-linux-c-opt/obj/tao/constrained/impls/admm/admm.o > CC arch-linux-c-opt/obj/tao/constrained/impls/ipm/ipm.o > CC arch-linux-c-opt/obj/tao/linesearch/impls/gpcglinesearch/gpcglinesearch.o > CC arch-linux-c-opt/obj/tao/linesearch/impls/unit/unit.o > CC arch-linux-c-opt/obj/tao/linesearch/impls/morethuente/morethuente.o > CC arch-linux-c-opt/obj/tao/snes/taosnes.o > CC arch-linux-c-opt/obj/tao/linesearch/interface/taolinesearch.o > CC arch-linux-c-opt/obj/tao/linesearch/impls/armijo/armijo.o > CC arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/ftn-auto/brgnf.o > CC arch-linux-c-opt/obj/tao/linesearch/impls/owarmijo/owarmijo.o > CC arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/ftn-custom/zbrgnf.o > CC arch-linux-c-opt/obj/tao/interface/dlregistao.o > CC arch-linux-c-opt/obj/tao/leastsquares/impls/pounders/gqt.o > CC arch-linux-c-opt/obj/tao/interface/fdiff.o > CC arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/brgn.o > CC arch-linux-c-opt/obj/tao/interface/taosolver_bounds.o > CC arch-linux-c-opt/obj/tao/interface/taosolverregi.o > CC arch-linux-c-opt/obj/tao/constrained/impls/ipm/pdipm.o > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_boundsf.o > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_hjf.o > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_fgf.o > CC arch-linux-c-opt/obj/tao/interface/taosolver_fg.o > CC arch-linux-c-opt/obj/tao/python/pythontao.o > CC arch-linux-c-opt/obj/tao/python/ftn-custom/zpythontaof.o > CC 
arch-linux-c-opt/obj/tao/interface/taosolver_hj.o > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolverf.o > CC arch-linux-c-opt/obj/tao/interface/ftn-custom/ztaosolverf.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/lmvm/lmvm.o > CC arch-linux-c-opt/obj/tao/interface/taosolver.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/owlqn/owlqn.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/neldermead/neldermead.o > CC arch-linux-c-opt/obj/tao/util/ftn-auto/tao_utilf.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/cg/taocg.o > FC arch-linux-c-opt/obj/sys/classes/bag/f2003-src/fsrc/bagenum.o > FC arch-linux-c-opt/obj/sys/objects/f2003-src/fsrc/optionenum.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/ntr/ntr.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/ntl/ntl.o > FC arch-linux-c-opt/obj/dm/f90-mod/petscdmswarmmod.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/bmrm/bmrm.o > CC arch-linux-c-opt/obj/tao/unconstrained/impls/nls/nls.o > CC arch-linux-c-opt/obj/tao/util/tao_util.o > FC arch-linux-c-opt/obj/dm/f90-mod/petscdmdamod.o > CC arch-linux-c-opt/obj/tao/leastsquares/impls/pounders/pounders.o > FC arch-linux-c-opt/obj/dm/f90-mod/petscdmplexmod.o > FC arch-linux-c-opt/obj/ksp/f90-mod/petsckspdefmod.o > FC arch-linux-c-opt/obj/ksp/f90-mod/petscpcmod.o > FC arch-linux-c-opt/obj/ksp/f90-mod/petsckspmod.o > FC arch-linux-c-opt/obj/snes/f90-mod/petscsnesmod.o > FC arch-linux-c-opt/obj/ts/f90-mod/petsctsmod.o > FC arch-linux-c-opt/obj/tao/f90-mod/petsctaomod.o > CLINKER arch-linux-c-opt/lib/libpetsc.so.3.019.2 > *** Building SLEPc *** > Checking environment... done > Checking PETSc installation... done > Generating Fortran stubs... done > Checking LAPACK library... done > Checking SCALAPACK... done > Writing various configuration files... done > > ================================================================================ > SLEPc Configuration > ================================================================================ > > SLEPc directory: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > It is a git repository on branch: remotes/origin/jose/test-petsc-branch~2 > SLEPc prefix directory: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt > PETSc directory: > /home/vrkaka/SLlibs/petsc > It is a git repository on branch: main > Architecture "arch-linux-c-opt" with double precision real numbers > SCALAPACK from SCALAPACK linked by PETSc > > xxx==========================================================================xxx > Configure stage complete. 
Now build the SLEPc library with: > make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt > xxx==========================================================================xxx > > ========================================== > Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:20:55 +0300 > Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux > ----------------------------------------- > Using SLEPc directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > Using PETSc directory: /home/vrkaka/SLlibs/petsc > Using PETSc arch: arch-linux-c-opt > ----------------------------------------- > SLEPC_VERSION_RELEASE 0 > SLEPC_VERSION_MAJOR 3 > SLEPC_VERSION_MINOR 19 > SLEPC_VERSION_SUBMINOR 0 > SLEPC_VERSION_DATE "unknown" > SLEPC_VERSION_GIT "unknown" > SLEPC_VERSION_DATE_GIT "unknown" > ----------------------------------------- > Using SLEPc configure options: --prefix=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt > Using SLEPc configuration flags: > #define SLEPC_PETSC_DIR "/home/vrkaka/SLlibs/petsc" > #define SLEPC_PETSC_ARCH "arch-linux-c-opt" > #define SLEPC_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc" > #define SLEPC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" > #define SLEPC_VERSION_GIT "v3.19.0-34-ga2e6dffce" > #define SLEPC_VERSION_DATE_GIT "2023-05-09 07:30:59 +0000" > #define SLEPC_VERSION_BRANCH_GIT "remotes/origin/jose/test-petsc-branch~2" > #define SLEPC_HAVE_SCALAPACK 1 > #define SLEPC_SCALAPACK_HAVE_UNDERSCORE 1 > #define SLEPC_HAVE_PACKAGES ":scalapack:" > ----------------------------------------- > PETSC_VERSION_RELEASE 0 > PETSC_VERSION_MAJOR 3 > PETSC_VERSION_MINOR 19 > PETSC_VERSION_SUBMINOR 2 > PETSC_VERSION_DATE "unknown" > PETSC_VERSION_GIT "unknown" > PETSC_VERSION_DATE_GIT "unknown" > ----------------------------------------- > Using PETSc configure options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 > Using PETSc configuration flags: > #define PETSC_ARCH "arch-linux-c-opt" > #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) > #define PETSC_BLASLAPACK_UNDERSCORE 1 > #define PETSC_CLANGUAGE_C 1 > #define PETSC_CXX_RESTRICT __restrict > #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) > #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) > #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) > #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) > #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" > #define PETSC_DIR_SEPARATOR '/' > #define PETSC_FORTRAN_CHARLEN_T size_t > #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 > #define PETSC_FUNCTION_NAME_C __func__ > #define PETSC_FUNCTION_NAME_CXX __func__ > #define PETSC_HAVE_ACCESS 1 > #define PETSC_HAVE_ATOLL 1 > #define PETSC_HAVE_ATTRIBUTEALIGNED 1 > #define PETSC_HAVE_BUILTIN_EXPECT 1 > #define PETSC_HAVE_BZERO 1 > #define PETSC_HAVE_C99_COMPLEX 1 > #define PETSC_HAVE_CLOCK 1 > #define PETSC_HAVE_CXX 1 > #define PETSC_HAVE_CXX_ATOMIC 1 > #define PETSC_HAVE_CXX_COMPLEX 1 > #define PETSC_HAVE_CXX_COMPLEX_FIX 1 > #define PETSC_HAVE_CXX_DIALECT_CXX11 
1 > #define PETSC_HAVE_CXX_DIALECT_CXX14 1 > #define PETSC_HAVE_CXX_DIALECT_CXX17 1 > #define PETSC_HAVE_CXX_DIALECT_CXX20 1 > #define PETSC_HAVE_DLADDR 1 > #define PETSC_HAVE_DLCLOSE 1 > #define PETSC_HAVE_DLERROR 1 > #define PETSC_HAVE_DLFCN_H 1 > #define PETSC_HAVE_DLOPEN 1 > #define PETSC_HAVE_DLSYM 1 > #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 > #define PETSC_HAVE_DRAND48 1 > #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 > #define PETSC_HAVE_ERF 1 > #define PETSC_HAVE_EXECUTABLE_EXPORT 1 > #define PETSC_HAVE_EXODUSII 1 > #define PETSC_HAVE_FCNTL_H 1 > #define PETSC_HAVE_FENV_H 1 > #define PETSC_HAVE_FE_VALUES 1 > #define PETSC_HAVE_FLOAT_H 1 > #define PETSC_HAVE_FORK 1 > #define PETSC_HAVE_FORTRAN 1 > #define PETSC_HAVE_FORTRAN_FLUSH 1 > #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 > #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 > #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 > #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 > #define PETSC_HAVE_GETCWD 1 > #define PETSC_HAVE_GETDOMAINNAME 1 > #define PETSC_HAVE_GETHOSTBYNAME 1 > #define PETSC_HAVE_GETHOSTNAME 1 > #define PETSC_HAVE_GETPAGESIZE 1 > #define PETSC_HAVE_GETRUSAGE 1 > #define PETSC_HAVE_HDF5 1 > #define PETSC_HAVE_IMMINTRIN_H 1 > #define PETSC_HAVE_INTTYPES_H 1 > #define PETSC_HAVE_ISINF 1 > #define PETSC_HAVE_ISNAN 1 > #define PETSC_HAVE_ISNORMAL 1 > #define PETSC_HAVE_LGAMMA 1 > #define PETSC_HAVE_LOG2 1 > #define PETSC_HAVE_LSEEK 1 > #define PETSC_HAVE_MALLOC_H 1 > #define PETSC_HAVE_MED 1 > #define PETSC_HAVE_MEMMOVE 1 > #define PETSC_HAVE_METIS 1 > #define PETSC_HAVE_MKSTEMP 1 > #define PETSC_HAVE_MMAP 1 > #define PETSC_HAVE_MPICH 1 > #define PETSC_HAVE_MPICH_NUMVERSION 40101300 > #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 > #define PETSC_HAVE_MPIIO 1 > #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 > #define PETSC_HAVE_MPI_COMBINER_DUP 1 > #define PETSC_HAVE_MPI_COMBINER_NAMED 1 > #define PETSC_HAVE_MPI_F90MODULE 1 > #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 > #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 > #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 > #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 > #define PETSC_HAVE_MPI_INIT_THREAD 1 > #define PETSC_HAVE_MPI_INT64_T 1 > #define PETSC_HAVE_MPI_LARGE_COUNT 1 > #define PETSC_HAVE_MPI_LONG_DOUBLE 1 > #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 > #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 > #define PETSC_HAVE_MPI_ONE_SIDED 1 > #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 > #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 > #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 > #define PETSC_HAVE_MPI_RGET 1 > #define PETSC_HAVE_MPI_WIN_CREATE 1 > #define PETSC_HAVE_MUMPS 1 > #define PETSC_HAVE_NANOSLEEP 1 > #define PETSC_HAVE_NETCDF 1 > #define PETSC_HAVE_NETDB_H 1 > #define PETSC_HAVE_NETINET_IN_H 1 > #define PETSC_HAVE_OPENBLAS 1 > #define PETSC_HAVE_OPENMP 1 > #define PETSC_HAVE_PACKAGES ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" > #define PETSC_HAVE_PNETCDF 1 > #define PETSC_HAVE_POPEN 1 > #define PETSC_HAVE_POSIX_MEMALIGN 1 > #define PETSC_HAVE_PTHREAD 1 > #define PETSC_HAVE_PWD_H 1 > #define PETSC_HAVE_RAND 1 > #define PETSC_HAVE_READLINK 1 > #define PETSC_HAVE_REALPATH 1 > #define PETSC_HAVE_REAL___FLOAT128 1 > #define PETSC_HAVE_REGEX 1 > #define PETSC_HAVE_RTLD_GLOBAL 1 > #define PETSC_HAVE_RTLD_LAZY 1 > #define PETSC_HAVE_RTLD_LOCAL 1 > #define PETSC_HAVE_RTLD_NOW 1 > #define PETSC_HAVE_SCALAPACK 1 > #define PETSC_HAVE_SETJMP_H 1 > #define PETSC_HAVE_SLEEP 1 > #define 
PETSC_HAVE_SLEPC 1 > #define PETSC_HAVE_SNPRINTF 1 > #define PETSC_HAVE_SOCKET 1 > #define PETSC_HAVE_SOWING 1 > #define PETSC_HAVE_SO_REUSEADDR 1 > #define PETSC_HAVE_STDATOMIC_H 1 > #define PETSC_HAVE_STDINT_H 1 > #define PETSC_HAVE_STRCASECMP 1 > #define PETSC_HAVE_STRINGS_H 1 > #define PETSC_HAVE_STRUCT_SIGACTION 1 > #define PETSC_HAVE_SYS_PARAM_H 1 > #define PETSC_HAVE_SYS_PROCFS_H 1 > #define PETSC_HAVE_SYS_RESOURCE_H 1 > #define PETSC_HAVE_SYS_SOCKET_H 1 > #define PETSC_HAVE_SYS_TIMES_H 1 > #define PETSC_HAVE_SYS_TIME_H 1 > #define PETSC_HAVE_SYS_TYPES_H 1 > #define PETSC_HAVE_SYS_UTSNAME_H 1 > #define PETSC_HAVE_SYS_WAIT_H 1 > #define PETSC_HAVE_TAU_PERFSTUBS 1 > #define PETSC_HAVE_TGAMMA 1 > #define PETSC_HAVE_TIME 1 > #define PETSC_HAVE_TIME_H 1 > #define PETSC_HAVE_UNAME 1 > #define PETSC_HAVE_UNISTD_H 1 > #define PETSC_HAVE_USLEEP 1 > #define PETSC_HAVE_VA_COPY 1 > #define PETSC_HAVE_VSNPRINTF 1 > #define PETSC_HAVE_XMMINTRIN_H 1 > #define PETSC_HDF5_HAVE_PARALLEL 1 > #define PETSC_HDF5_HAVE_ZLIB 1 > #define PETSC_INTPTR_T intptr_t > #define PETSC_INTPTR_T_FMT "#" PRIxPTR > #define PETSC_IS_COLORING_MAX USHRT_MAX > #define PETSC_IS_COLORING_VALUE_TYPE short > #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 > #define PETSC_LEVEL1_DCACHE_LINESIZE 64 > #define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" > #define PETSC_MAX_PATH_LEN 4096 > #define PETSC_MEMALIGN 16 > #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi" > #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT > #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" > #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA > #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 > #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 > #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 > #define PETSC_PYTHON_EXE "/usr/bin/python3" > #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) > #define PETSC_REPLACE_DIR_SEPARATOR '\\' > #define PETSC_SIGNAL_CAST > #define PETSC_SIZEOF_INT 4 > #define PETSC_SIZEOF_LONG 8 > #define PETSC_SIZEOF_LONG_LONG 8 > #define PETSC_SIZEOF_SIZE_T 8 > #define PETSC_SIZEOF_VOID_P 8 > #define PETSC_SLSUFFIX "so" > #define PETSC_UINTPTR_T uintptr_t > #define PETSC_UINTPTR_T_FMT "#" PRIxPTR > #define PETSC_UNUSED __attribute((unused)) > #define PETSC_USE_AVX512_KERNELS 1 > #define PETSC_USE_BACKWARD_LOOP 1 > #define PETSC_USE_CTABLE 1 > #define PETSC_USE_DMLANDAU_2D 1 > #define PETSC_USE_INFO 1 > #define PETSC_USE_ISATTY 1 > #define PETSC_USE_LOG 1 > #define PETSC_USE_MALLOC_COALESCED 1 > #define PETSC_USE_PROC_FOR_SIZE 1 > #define PETSC_USE_REAL_DOUBLE 1 > #define PETSC_USE_SHARED_LIBRARIES 1 > #define PETSC_USE_SINGLE_LIBRARY 1 > #define PETSC_USE_SOCKET_VIEWER 1 > #define PETSC_USE_VISIBILITY_C 1 > #define PETSC_USE_VISIBILITY_CXX 1 > #define PETSC_USING_64BIT_PTR 1 > #define PETSC_USING_F2003 1 > #define PETSC_USING_F90FREEFORM 1 > #define PETSC_VERSION_BRANCH_GIT "main" > #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" > #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" > #define PETSC__BSD_SOURCE 1 > #define PETSC__DEFAULT_SOURCE 1 > #define PETSC__GNU_SOURCE 1 > ----------------------------------------- > Using C/C++ include paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include 
-I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > Using Fortran include/module paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > ----------------------------------------- > Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc > Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 > Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 > ----------------------------------------- > Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -lslepc -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ > ------------------------------------------ > Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec > ------------------------------------------ > Using MAKE: /usr/bin/gmake > Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > ========================================== > /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= slepc_libs > 
/usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegen.py --petsc-arch=arch-linux-c-opt --pkg-dir=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc --pkg-name=slepc --pkg-pkgs=sys,eps,svd,pep,nep,mfn,lme --pkg-arch=arch-linux-c-opt > CC arch-linux-c-opt/obj/sys/ftn-auto/slepcscf.o > CC arch-linux-c-opt/obj/sys/ftn-auto/slepcinitf.o > CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_startf.o > CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_start.o > CC arch-linux-c-opt/obj/sys/dlregisslepc.o > CC arch-linux-c-opt/obj/sys/slepcutil.o > CC arch-linux-c-opt/obj/sys/slepcinit.o > CC arch-linux-c-opt/obj/sys/slepcsc.o > CC arch-linux-c-opt/obj/sys/slepccontour.o > Use "/usr/bin/gmake V=1" to see verbose compile lines, "/usr/bin/gmake V=0" to suppress. > FC arch-linux-c-opt/obj/sys/f90-mod/slepcsysmod.o > CC arch-linux-c-opt/obj/sys/vec/ftn-auto/vecutilf.o > CC arch-linux-c-opt/obj/sys/ftn-custom/zslepcutil.o > CC arch-linux-c-opt/obj/sys/vec/pool.o > CC arch-linux-c-opt/obj/sys/mat/ftn-auto/matutilf.o > CC arch-linux-c-opt/obj/sys/vec/vecutil.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-custom/zpolygon.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-auto/rgpolygonf.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/ring/ftn-auto/rgringf.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-custom/zellipse.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-auto/rgellipsef.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/rgellipse.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-custom/zinterval.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-auto/rgintervalf.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/ring/rgring.o > CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgregis.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/rgpolygon.o > CC arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-auto/rgbasicf.o > CC arch-linux-c-opt/obj/sys/mat/matutil.o > CC arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-custom/zrgf.o > CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgbasic.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/phi/ftn-auto/fnphif.o > CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/rginterval.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/combine/ftn-auto/fncombinef.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/phi/fnphi.o > CC arch-linux-c-opt/obj/sys/vec/veccomp.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/rational/ftn-custom/zrational.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/sqrt/fnsqrt.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/fnutil.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/combine/fncombine.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/log/fnlog.o > CC arch-linux-c-opt/obj/sys/classes/fn/interface/fnregis.o > CC arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-auto/fnbasicf.o > CC arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-custom/zfnf.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/invsqrt/fninvsqrt.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/rational/fnrational.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/cayley/ftn-auto/cayleyf.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/precond/ftn-auto/precondf.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/cayley/cayley.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/ftn-auto/filterf.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/precond/precond.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/sinvert/sinvert.o > CC 
arch-linux-c-opt/obj/sys/classes/st/impls/filter/filter.o > CC arch-linux-c-opt/obj/sys/classes/fn/interface/fnbasic.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/shift/shift.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/shell.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-auto/shellf.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-custom/zshell.o > CC arch-linux-c-opt/obj/sys/classes/fn/impls/exp/fnexp.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/stregis.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsetf.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/stset.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stfuncf.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-custom/zstf.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/stshellmat.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stslesf.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/stfunc.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/stsles.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsolvef.o > CC arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/ftn-auto/bvtensorf.o > CC arch-linux-c-opt/obj/sys/classes/st/interface/stsolve.o > CC arch-linux-c-opt/obj/sys/classes/bv/impls/contiguous/contig.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbiorthog.o > CC arch-linux-c-opt/obj/sys/classes/bv/impls/mat/bvmat.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvblas.o > CC arch-linux-c-opt/obj/sys/classes/bv/impls/svec/svec.o > CC arch-linux-c-opt/obj/sys/classes/bv/impls/vecs/vecs.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvkrylov.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvfunc.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvregis.o > CC arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/bvtensor.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbasic.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvcontour.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-custom/zbvf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbiorthogf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbasicf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvcontourf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvfuncf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvglobalf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvkrylovf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvopsf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvorthogf.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvops.o > CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/filtlan.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvglobal.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvlapack.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/ftn-auto/dshsvdf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/svd/ftn-auto/dssvdf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/dsutil.o > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvorthog.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-auto/dspepf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-custom/zdspepf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/nep/ftn-auto/dsnepf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghep/dsghep.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhepts/dsnhepts.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/svd/dssvd.o > CC 
arch-linux-c-opt/obj/sys/classes/ds/impls/gnhep/dsgnhep.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/dspep.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhep/dsnhep.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/dshsvd.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/nep/dsnep.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/hz.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dmerg2.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dlaed3m.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/ftn-auto/dsgsvdf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsbtdc.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsrtdf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dibtdc.o > CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsbasicf.o > CC arch-linux-c-opt/obj/sys/classes/ds/interface/dsbasic.o > CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-custom/zdsf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/invit.o > CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsopsf.o > CC arch-linux-c-opt/obj/sys/classes/ds/interface/dsops.o > CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsprivf.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/dshep.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/dsghiep.o > CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/ftn-auto/lobpcgf.o > CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/ftn-auto/rqcgf.o > CC arch-linux-c-opt/obj/eps/impls/lyapii/ftn-auto/lyapiif.o > CC arch-linux-c-opt/obj/sys/classes/ds/interface/dspriv.o > CC arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/dsgsvd.o > CC arch-linux-c-opt/obj/eps/impls/subspace/subspace.o > CC arch-linux-c-opt/obj/eps/impls/external/scalapack/scalapack.o > CC arch-linux-c-opt/obj/eps/impls/lapack/lapack.o > CC arch-linux-c-opt/obj/eps/impls/ciss/ftn-auto/cissf.o > CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/rqcg.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdschm.o > CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/lobpcg.o > CC arch-linux-c-opt/obj/eps/impls/davidson/davidson.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdtestconv.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdinitv.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdgd2.o > CC arch-linux-c-opt/obj/eps/impls/lyapii/lyapii.o > CC arch-linux-c-opt/obj/eps/impls/davidson/jd/ftn-auto/jdf.o > CC arch-linux-c-opt/obj/eps/impls/davidson/gd/ftn-auto/gdf.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdcalcpairs.o > CC arch-linux-c-opt/obj/eps/impls/davidson/gd/gd.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdutils.o > CC arch-linux-c-opt/obj/eps/impls/davidson/jd/jd.o > CC arch-linux-c-opt/obj/eps/impls/krylov/lanczos/ftn-auto/lanczosf.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdupdatev.o > CC arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/ftn-auto/arnoldif.o > CC arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/arnoldi.o > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-indef.o > CC arch-linux-c-opt/obj/eps/impls/krylov/epskrylov.o > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdimprovex.o > CC arch-linux-c-opt/obj/eps/impls/ciss/ciss.o > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-custom/zkrylovschurf.o > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-auto/krylovschurf.o > CC arch-linux-c-opt/obj/eps/impls/power/ftn-auto/powerf.o > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-twosided.o > CC arch-linux-c-opt/obj/eps/interface/dlregiseps.o > CC arch-linux-c-opt/obj/eps/interface/epsbasic.o > CC 
arch-linux-c-opt/obj/eps/interface/epsregis.o > CC arch-linux-c-opt/obj/eps/impls/krylov/lanczos/lanczos.o > CC arch-linux-c-opt/obj/eps/interface/epsdefault.o > CC arch-linux-c-opt/obj/eps/interface/epsmon.o > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/krylovschur.o > CC arch-linux-c-opt/obj/eps/interface/epsopts.o > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsbasicf.o > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsdefaultf.o > CC arch-linux-c-opt/obj/eps/interface/epssetup.o > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsmonf.o > CC arch-linux-c-opt/obj/eps/impls/power/power.o > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssetupf.o > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsviewf.o > CC arch-linux-c-opt/obj/eps/interface/epssolve.o > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsoptsf.o > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssolvef.o > CC arch-linux-c-opt/obj/eps/interface/ftn-custom/zepsf.o > CC arch-linux-c-opt/obj/svd/impls/lanczos/ftn-auto/gklanczosf.o > CC arch-linux-c-opt/obj/svd/impls/cross/ftn-auto/crossf.o > CC arch-linux-c-opt/obj/eps/interface/epsview.o > CC arch-linux-c-opt/obj/svd/impls/external/scalapack/svdscalap.o > CC arch-linux-c-opt/obj/svd/impls/randomized/rsvd.o > CC arch-linux-c-opt/obj/svd/impls/trlanczos/ftn-auto/trlanczosf.o > CC arch-linux-c-opt/obj/svd/impls/cyclic/ftn-auto/cyclicf.o > CC arch-linux-c-opt/obj/svd/interface/dlregissvd.o > CC arch-linux-c-opt/obj/svd/interface/svdbasic.o > CC arch-linux-c-opt/obj/svd/impls/lapack/svdlapack.o > CC arch-linux-c-opt/obj/svd/impls/lanczos/gklanczos.o > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-slice.o > CC arch-linux-c-opt/obj/svd/interface/svddefault.o > CC arch-linux-c-opt/obj/svd/impls/cross/cross.o > CC arch-linux-c-opt/obj/svd/interface/svdregis.o > CC arch-linux-c-opt/obj/svd/interface/svdmon.o > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdbasicf.o > CC arch-linux-c-opt/obj/svd/interface/svdopts.o > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svddefaultf.o > CC arch-linux-c-opt/obj/svd/interface/svdsetup.o > CC arch-linux-c-opt/obj/svd/interface/svdsolve.o > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdmonf.o > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdoptsf.o > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsetupf.o > CC arch-linux-c-opt/obj/svd/interface/ftn-custom/zsvdf.o > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsolvef.o > CC arch-linux-c-opt/obj/svd/interface/svdview.o > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdviewf.o > CC arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/ftn-auto/qarnoldif.o > CC arch-linux-c-opt/obj/pep/impls/peputils.o > CC arch-linux-c-opt/obj/svd/impls/cyclic/cyclic.o > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/qslicef.o > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-custom/zstoarf.o > CC arch-linux-c-opt/obj/pep/impls/krylov/pepkrylov.o > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/stoarf.o > CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ftn-auto/ptoarf.o > CC arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/qarnoldi.o > CC arch-linux-c-opt/obj/pep/impls/linear/ftn-auto/linearf.o > CC arch-linux-c-opt/obj/pep/impls/linear/qeplin.o > CC arch-linux-c-opt/obj/pep/impls/jd/ftn-auto/pjdf.o > CC arch-linux-c-opt/obj/pep/interface/dlregispep.o > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/stoar.o > CC arch-linux-c-opt/obj/pep/interface/pepbasic.o > CC arch-linux-c-opt/obj/pep/interface/pepmon.o > CC arch-linux-c-opt/obj/pep/impls/linear/linear.o > 
CC arch-linux-c-opt/obj/pep/interface/pepdefault.o > CC arch-linux-c-opt/obj/svd/impls/trlanczos/trlanczos.o > CC arch-linux-c-opt/obj/pep/interface/pepregis.o > CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ptoar.o > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepbasicf.o > CC arch-linux-c-opt/obj/pep/interface/pepopts.o > CC arch-linux-c-opt/obj/pep/interface/pepsetup.o > CC arch-linux-c-opt/obj/pep/interface/pepsolve.o > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepdefaultf.o > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepmonf.o > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepoptsf.o > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsetupf.o > CC arch-linux-c-opt/obj/pep/interface/ftn-custom/zpepf.o > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepviewf.o > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsolvef.o > CC arch-linux-c-opt/obj/pep/interface/peprefine.o > CC arch-linux-c-opt/obj/pep/interface/pepview.o > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/qslice.o > CC arch-linux-c-opt/obj/nep/impls/slp/ftn-auto/slpf.o > CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-custom/znleigsf.o > CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigs-fullbf.o > CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigsf.o > CC arch-linux-c-opt/obj/nep/impls/interpol/ftn-auto/interpolf.o > CC arch-linux-c-opt/obj/nep/impls/slp/slp.o > CC arch-linux-c-opt/obj/nep/impls/narnoldi/ftn-auto/narnoldif.o > CC arch-linux-c-opt/obj/nep/impls/slp/slp-twosided.o > CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs-fullb.o > CC arch-linux-c-opt/obj/nep/impls/interpol/interpol.o > CC arch-linux-c-opt/obj/nep/impls/rii/ftn-auto/riif.o > CC arch-linux-c-opt/obj/nep/interface/dlregisnep.o > CC arch-linux-c-opt/obj/nep/impls/narnoldi/narnoldi.o > CC arch-linux-c-opt/obj/pep/impls/krylov/toar/nrefine.o > CC arch-linux-c-opt/obj/nep/interface/nepdefault.o > CC arch-linux-c-opt/obj/nep/interface/nepregis.o > CC arch-linux-c-opt/obj/nep/impls/rii/rii.o > CC arch-linux-c-opt/obj/nep/interface/nepbasic.o > CC arch-linux-c-opt/obj/nep/interface/nepmon.o > CC arch-linux-c-opt/obj/pep/impls/jd/pjd.o > CC arch-linux-c-opt/obj/nep/interface/nepresolv.o > CC arch-linux-c-opt/obj/nep/interface/nepopts.o > CC arch-linux-c-opt/obj/nep/impls/nepdefl.o > CC arch-linux-c-opt/obj/nep/interface/nepsetup.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepdefaultf.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepbasicf.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepmonf.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepoptsf.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepresolvf.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsetupf.o > CC arch-linux-c-opt/obj/nep/interface/nepsolve.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsolvef.o > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepviewf.o > CC arch-linux-c-opt/obj/nep/interface/ftn-custom/znepf.o > CC arch-linux-c-opt/obj/mfn/interface/dlregismfn.o > CC arch-linux-c-opt/obj/mfn/impls/krylov/mfnkrylov.o > CC arch-linux-c-opt/obj/nep/interface/nepview.o > CC arch-linux-c-opt/obj/nep/interface/neprefine.o > CC arch-linux-c-opt/obj/mfn/interface/mfnmon.o > CC arch-linux-c-opt/obj/mfn/interface/mfnregis.o > CC arch-linux-c-opt/obj/mfn/impls/expokit/mfnexpokit.o > CC arch-linux-c-opt/obj/mfn/interface/mfnopts.o > CC arch-linux-c-opt/obj/mfn/interface/mfnbasic.o > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnbasicf.o > CC arch-linux-c-opt/obj/mfn/interface/mfnsolve.o > CC arch-linux-c-opt/obj/mfn/interface/mfnsetup.o 
> CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnmonf.o > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnoptsf.o > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsetupf.o > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsolvef.o > CC arch-linux-c-opt/obj/mfn/interface/ftn-custom/zmfnf.o > CC arch-linux-c-opt/obj/lme/interface/dlregislme.o > CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs.o > CC arch-linux-c-opt/obj/lme/interface/lmeregis.o > CC arch-linux-c-opt/obj/lme/interface/lmemon.o > CC arch-linux-c-opt/obj/lme/impls/krylov/lmekrylov.o > CC arch-linux-c-opt/obj/lme/interface/lmebasic.o > CC arch-linux-c-opt/obj/lme/interface/lmeopts.o > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmemonf.o > CC arch-linux-c-opt/obj/lme/interface/lmesetup.o > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmebasicf.o > CC arch-linux-c-opt/obj/lme/interface/lmesolve.o > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmeoptsf.o > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesolvef.o > CC arch-linux-c-opt/obj/lme/interface/lmedense.o > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesetupf.o > CC arch-linux-c-opt/obj/lme/interface/ftn-custom/zlmef.o > FC arch-linux-c-opt/obj/sys/classes/rg/f90-mod/slepcrgmod.o > FC arch-linux-c-opt/obj/sys/classes/bv/f90-mod/slepcbvmod.o > FC arch-linux-c-opt/obj/sys/classes/fn/f90-mod/slepcfnmod.o > FC arch-linux-c-opt/obj/lme/f90-mod/slepclmemod.o > FC arch-linux-c-opt/obj/sys/classes/ds/f90-mod/slepcdsmod.o > FC arch-linux-c-opt/obj/sys/classes/st/f90-mod/slepcstmod.o > FC arch-linux-c-opt/obj/mfn/f90-mod/slepcmfnmod.o > FC arch-linux-c-opt/obj/eps/f90-mod/slepcepsmod.o > FC arch-linux-c-opt/obj/svd/f90-mod/slepcsvdmod.o > FC arch-linux-c-opt/obj/pep/f90-mod/slepcpepmod.o > FC arch-linux-c-opt/obj/nep/f90-mod/slepcnepmod.o > CLINKER arch-linux-c-opt/lib/libslepc.so.3.019.0 > Now to install the library do: > make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc install > ========================================= > *** Installing SLEPc *** > *** Installing SLEPc at prefix location: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt *** > ==================================== > Install complete. > Now to check if the libraries are working do (in current directory): > make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check > ==================================== > /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc install-builtafterslepc > /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc slepc4py-install > gmake[6]: Nothing to be done for 'slepc4py-install'. > ========================================= > Now to check if the libraries are working do: > make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check > ========================================= > > > > > and here is the cmake message when configuring the project: > > vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ cmake .. 
> -- The CXX compiler identification is GNU 11.3.0 > -- Detecting CXX compiler ABI info > -- Detecting CXX compiler ABI info - done > -- Check for working CXX compiler: /usr/bin/c++ - skipped > -- Detecting CXX compile features > -- Detecting CXX compile features - done > -- MPI headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -- MPI library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmpich.so > -- GMSH HEADERS NOT FOUND (OPTIONAL) > -- GMSH LIBRARY NOT FOUND (OPTIONAL) > -- Ipopt headers found at /home/vrkaka/Ipopt/installation/include/coin-or > -- Ipopt library found at /home/vrkaka/Ipopt/installation/lib/libipopt.so > -- Blas header cblas.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -- Blas library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libopenblas.so > -- Metis headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -- Metis library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmetis.so > -- Mumps headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -- Mumps library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libcmumps.a > -- Petsc header petsc.h found at /home/vrkaka/SLlibs/petsc/include > -- Petsc header petscconf.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -- Petsc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libpetsc.so > -- Slepc headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -- Slepc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libslepc.so > -- Configuring done > -- Generating done > -- Build files have been written to: /home/vrkaka/sparselizardipopt/build > > > > After that building the project with cmake goes fine and a simple mpi test works > > > -Kalle -------------- next part -------------- An HTML attachment was scrubbed... URL: From bldenton at buffalo.edu Wed Jun 7 10:04:41 2023 From: bldenton at buffalo.edu (Brandon Denton) Date: Wed, 7 Jun 2023 11:04:41 -0400 Subject: [petsc-users] PETSc :: FEM Help Message-ID: Good Morning, I'm trying to verify that the CAD -> PETSc/DMPlex methods I've developed can be used for FEM analyses using PETSc. Attached is my current attempt where I import a CAD STEP file to create a volumetric tetrahedral discretization (DMPlex), designate boundary condition points using DMLabels, and solve the Laplace problem (heat) with Dirichlet conditions on each end. At command line I indicate the STEP file with the -filename option and the dual space degree with -petscspace_degree 2. The run ends with either a SEGV Fault or a General MPI Communication Error. Could you please look over the attached file to tell me if what I'm doing to set up the FEM problem is wrong? Thank you in advance for your time and help. -Brandon TYPICAL ERROR MESSAGE [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: General MPI error [0]PETSC ERROR: MPI error 605109765 Invalid communicator, error stack: PMPI_Comm_get_attr(344): MPI_Comm_get_attr(comm=0x0, comm_keyval=-1539309568, attribute_val=0x7ffe75a58848, flag=0x7ffe75a58844) failed MPII_Comm_get_attr(257): MPIR_Comm_get_attr(comm=0x0, comm_keyval=-1539309568, attribute_val=0x7ffe75a58848, flag=0x7ffe75a58844) failed MPII_Comm_get_attr(53).: Invalid communicator [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc! 
[0]PETSC ERROR: Option left: name:-dm_plex_refine_without_snap_to_geom value: 0 source: command line [0]PETSC ERROR: Option left: name:-dm_refine value: 1 source: command line [0]PETSC ERROR: Option left: name:-snes_monitor (no value) source: command line [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.18.5-1817-gd2497b8de4c GIT Date: 2023-05-22 18:44:03 +0000 [0]PETSC ERROR: ./thermal on a named XPS. by bdenton Wed Jun 7 11:03:43 2023 [0]PETSC ERROR: Configure options --with-make-np=16 --prefix=/mnt/c/Users/Brandon/software/libs/petsc/3.19.1-gitlab/gcc/11.2.0/mpich/3.4.2/openblas/0.3.17/opt --with-debugging=false --COPTFLAGS="-O3 -mavx" --CXXOPTFLAGS="-O3 -mavx" --FOPTFLAGS=-O3 --with-shared-libraries=1 --with-mpi-dir=/mnt/c/Users/Brandon/software/libs/mpich/3.4.2/gcc/11.2.0 --with-mumps=true --download-mumps=1 --with-metis=true --download-metis=1 --with-parmetis=true --download-parmetis=1 --with-superlu=true --download-superlu=1 --with-superludir=true --download-superlu_dist=1 --with-blacs=true --download-blacs=1 --with-scalapack=true --download-scalapack=1 --with-hypre=true --download-hypre=1 --with-hdf5-dir=/mnt/c/Users/Brandon/software/libs/hdf5/1.12.1/gcc/11.2.0 --with-valgrind-dir=/mnt/c/Users/Brandon/software/apps/valgrind/3.14.0 --with-blas-lib="[/mnt/c/Users/Brandon/software/libs/openblas/0.3.17/gcc/11.2.0/lib/libopenblas.so]" --with-lapack-lib="[/mnt/c/Users/Brandon/software/libs/openblas/0.3.17/gcc/11.2.0/lib/libopenblas.so]" --LDFLAGS= --with-tetgen=true --download-tetgen=1 --download-ctetgen=1 --download-opencascade=1 --download-egads [0]PETSC ERROR: #1 PetscObjectName() at /mnt/c/Users/Brandon/software/builddir/petsc-3.19.1-gitlab/src/sys/objects/pname.c:119 [0]PETSC ERROR: #2 PetscObjectGetName() at /mnt/c/Users/Brandon/software/builddir/petsc-3.19.1-gitlab/src/sys/objects/pgname.c:27 [0]PETSC ERROR: #3 PetscDSAddBoundary() at /mnt/c/Users/Brandon/software/builddir/petsc-3.19.1-gitlab/src/dm/dt/interface/dtds.c:3404 [0]PETSC ERROR: #4 DMAddBoundary() at /mnt/c/Users/Brandon/software/builddir/petsc-3.19.1-gitlab/src/dm/interface/dm.c:7828 [0]PETSC ERROR: #5 main() at /mnt/c/Users/Brandon/Documents/School/Dissertation/Software/EGADS-dev/thermal_v319/thermal_nozzle.c:173 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -dm_plex_geom_print_model 1 (source: command line) [0]PETSC ERROR: -dm_plex_geom_shape_opt 0 (source: command line) [0]PETSC ERROR: -dm_plex_refine_without_snap_to_geom 0 (source: command line) [0]PETSC ERROR: -dm_refine 1 (source: command line) [0]PETSC ERROR: -filename ./examples/Nozzle_example.stp (source: command line) [0]PETSC ERROR: -petscspace_degree 2 (source: command line) [0]PETSC ERROR: -snes_monitor (source: command line) [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_SELF, 98) - process 0 [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=98 : system msg for write_line failure : Bad file descriptor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- static const char help[] = "Test of FEA DMPlex w/CAD functionality"; #include /* User-defined Work Context */ typedef struct { PetscInt dummy; char filename[PETSC_MAX_PATH_LEN]; // context containing CAD filename from command line } AppCtx; void f0_function(PetscInt dim, PetscInt Nf, PetscInt NfAux, const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[], const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[], PetscReal t, const PetscReal x[], PetscScalar f0[]) { f0[0] = 0.0; } void f1_function(PetscInt dim, PetscInt Nf, PetscInt NfAux, const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[], const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[], PetscReal t, const PetscReal x[], PetscScalar f1[]) { for(PetscInt d = 0; d < dim; ++d) f1[d] = u_x[d]; } void g3_uu_function(PetscInt dim, PetscInt Nf, PetscInt NfAux, const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[], const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[], PetscReal t, const PetscReal x[], PetscScalar g3[]) { for (PetscInt d=0; d < dim; ++d) g3[d*dim + d] = 1.0; } //void bc_inlet(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nc, PetscScalar bcval[]) static PetscErrorCode bc_inlet(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nc, PetscScalar *u, void *ctx) { u[0] = 1400.0; return PETSC_SUCCESS; //bcval[0] = 1400.0; } //void bc_outlet(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nc, PetscScalar bcval[]) static PetscErrorCode bc_outlet(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nc, PetscScalar *u, void *ctx) { u[0] = 100.0; return PETSC_SUCCESS; //bcval[0] = 100.0; } /* Procees Options - This should be removed once creation bug is fixed by Prof. Knepley */ static PetscErrorCode ProcessOptions(MPI_Comm comm, AppCtx *options) { PetscFunctionBeginUser; options->filename[0] = '\0'; PetscOptionsBegin(comm, "", "FEA DMPlex w/CAD Options", "DMPlex w/CAD"); PetscOptionsString("-filename", "The CAD/Geometry file", "ex18.c", options->filename, options->filename, PETSC_MAX_PATH_LEN, NULL); PetscOptionsEnd(); PetscFunctionReturn(0); } /* Main Function */ int main(int argc, char **argv) { // Define PETSc Variables DM dm, dmSurface; // Unstructured Grid SNES snes; // Nonlinear Solver Vec temp; // Solutions AppCtx ctx; // User-defined Work Context PetscInt dim; // DM Geometric Dimension PetscBool simplex; // Is DMPlex Simplex type? 
PetscFE fe; // PETSc Finite Element Object PetscDS ds; // PETSc Discretization System Object PetscMPIInt rank; // PETSc MPI Processor ID PetscFunctionBeginUser; PetscCall(PetscInitialize(&argc, &argv, NULL, help)); // Initialize PETSc PetscCall(ProcessOptions(PETSC_COMM_WORLD, &ctx)); // Process Options :: Here we get the filename from the command line options PetscCall(DMCreate(PETSC_COMM_WORLD, &dmSurface)); PetscCall(DMCreate(PETSC_COMM_WORLD, &dm)); PetscCall(DMSetType(dmSurface, DMPLEX)); PetscCall(DMSetType(dm, DMPLEX)); // Create DMPlex from CAD file PetscCall(MPI_Comm_rank(PETSC_COMM_WORLD, &rank)); // Get Rank of Current Processor if (!rank) { PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, ctx.filename, "EGADS", PETSC_TRUE, &dmSurface)); // Return dm created from CAD file } // Setup DMLabels for Boundary Conditions on DMPlex Vertices PetscInt nStart, nEnd, eStart, eEnd, faceID; DMLabel inletLabel, outletLabel; const PetscInt idA = 1, idB = 2; PetscCall(DMCreateLabel(dmSurface, "inlet")); PetscCall(DMCreateLabel(dmSurface, "outlet")); PetscCall(DMGetLabel(dmSurface, "inlet", &inletLabel)); PetscCall(DMGetLabel(dmSurface, "outlet", &outletLabel)); PetscCall(DMPlexGetDepthStratum(dmSurface, 0, &nStart, &nEnd)); PetscCall(DMPlexGetDepthStratum(dmSurface, 1, &eStart, &eEnd)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [nStart = %d || nEnd = %d] \n", nStart, nEnd)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [eStart = %d || eEnd = %d] \n", eStart, eEnd)); for (PetscInt ii = nStart; ii < nEnd; ++ii) { PetscCall(DMGetLabelValue(dmSurface, "EGADS Face ID", ii, &faceID)); if (faceID == 14) {PetscCall(DMSetLabelValue(dmSurface, "inlet", ii, faceID));} if (faceID == 7) {PetscCall(DMSetLabelValue(dmSurface, "outlet", ii, faceID));} } for (PetscInt ii = eStart; ii < eEnd; ++ii) { PetscCall(DMGetLabelValue(dmSurface, "EGADS Face ID", ii, &faceID)); if (faceID == 14) {PetscCall(DMSetLabelValue(dmSurface, "inlet", ii, faceID));} if (faceID == 7) {PetscCall(DMSetLabelValue(dmSurface, "outlet", ii, faceID));} } // Generate Volumetric Mesh PetscCall(DMPlexGenerate(dmSurface, "tetgen", PETSC_TRUE, &dm)); // Generate Volumetric Mesh PetscCall(DMDestroy(&dmSurface)); // Destroy DM dmSurface no longer needed PetscCall(DMSetApplicationContext(dm, &ctx)); // Link context to dm PetscCall(DMViewFromOptions(dm, NULL, "-dm_view")); // Write DM to file (hdf5) PetscCall(DMView(dm, PETSC_VIEWER_STDOUT_SELF)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [CREATED MESH] \n")); // Setup SNES and link to DM PetscCall(SNESCreate(PETSC_COMM_WORLD, &snes)); // Create SNES object PetscCall(SNESSetDM(snes, dm)); // Link SNES object to DM PetscCall(PetscPrintf(PETSC_COMM_SELF, " [SETUP SNES] \n")); // Setup Discretization PetscCall(DMGetDimension(dm, &dim)); // Get Geometric Dimension of the DM (2D or 3D) PetscCall(DMPlexIsSimplex(dm, &simplex)); // Check if DMPlex is Simplex Type PetscCall(PetscFECreateDefault(PetscObjectComm((PetscObject)dm), dim, 1, simplex, NULL, -1, &fe)); // Define and Create PetscFE Object PetscCall(DMAddField(dm, NULL, (PetscObject)fe)); // Add Field to Discretization object. In this case, the temperature field. PetscCall(DMCreateDS(dm)); // Create The Discretization System (DS) based on the field(s) added to the DM PetscCall(PetscPrintf(PETSC_COMM_SELF, " [SETUP DISCRETIZATION] \n")); // Setup Residuals & (Optional) Jacobian PetscInt testFieldID = 0; // This is an assumption. How do we find out which field ID goes with each added field? 
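  /* Note on the question above: fields are numbered in the order of the DMAddField() calls,
     starting at 0, so with the single DMAddField() call above the temperature field is field 0
     and testFieldID = 0 is correct; DMGetNumFields() and DMGetField() can be used to query them.
     A hedged observation: inletLabel and outletLabel were obtained from dmSurface, which has
     already been destroyed, so handing them to DMAddBoundary() on dm below may be the source of
     the invalid-communicator error reported above; recreating the labels on dm would avoid that. */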
PetscCall(DMGetDS(dm, &ds)); // Get DS associated with DM PetscCall(PetscDSSetResidual(ds, testFieldID, (void (*))f0_function, (void (*))f1_function)); // Set Residual Function PetscCall(PetscDSSetJacobian(ds, testFieldID, testFieldID, NULL, NULL, NULL, (void (*))g3_uu_function)); // Set Jacobian Function PetscCall(PetscPrintf(PETSC_COMM_SELF, " [SETUP RESIDUAL & JACOBIAN] \n")); // Apply inlet face conditions - Dirichlet PetscCall(DMAddBoundary(dm, DM_BC_ESSENTIAL, "inTemp", inletLabel, 1, &idA, 0, 0, NULL, (void (*)(void))bc_inlet, NULL, &ctx, NULL)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [SETUP BOUNDARY CONDITION inTemp] \n")); PetscCall(DMAddBoundary(dm, DM_BC_ESSENTIAL, "outTemp", outletLabel, 1, &idB, 0, 0, NULL, (void (*)(void))bc_outlet, NULL, &ctx, NULL)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [SETUP BOUNDARY CONDITIONS] \n")); // Create Global Vector, Initialize Values & Name PetscCall(DMCreateGlobalVector(dm, &temp)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [AFTER GLOBAL VECTOR] \n")); PetscCall(VecSet(temp, 100.0)); PetscCall(VecView(temp, PETSC_VIEWER_STDOUT_SELF)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [AFTER temp set to 100] \n")); PetscCall(PetscObjectSetName((PetscObject)temp, "temperature")); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [AFTER SET OBJECT NAME] \n")); PetscCall(DMPlexSetSNESLocalFEM(dm, &ctx, &ctx, &ctx)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [AFTER SNESLocalFEM] \n")); //Set SNES from Options PetscCall(SNESSetFromOptions(snes)); PetscCall(SNESView(snes, PETSC_VIEWER_STDOUT_SELF)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [AFTER SNESSetFromOptions] \n")); // Solve System of Equations PetscCall(SNESSolve(snes, NULL, temp)); PetscCall(PetscPrintf(PETSC_COMM_SELF, " [SOLVED PROBLEM] \n")); //Get Solution and View PetscCall(SNESGetSolution(snes, &temp)); PetscCall(VecViewFromOptions(temp, NULL, "-sol_view")); // Cleanup PetscCall(VecDestroy(&temp)); PetscCall(SNESDestroy(&snes)); PetscCall(DMDestroy(&dm)); PetscCall(PetscFinalize()); return 0; } From coltonbryant2021 at u.northwestern.edu Wed Jun 7 11:43:11 2023 From: coltonbryant2021 at u.northwestern.edu (Colton Bryant) Date: Wed, 7 Jun 2023 11:43:11 -0500 Subject: [petsc-users] Interpolation Between DMSTAG Objects Message-ID: Hello, I am new to PETSc so apologies in advance if there is an easy answer to this question I've overlooked. I have a problem in which the computational domain is divided into two overlapping regions (overset grids). I would like to discretize each region as a separate DMSTAG object. What I do not understand is how to go about interpolating a vector from say DM1 onto nodes in DM2. My current (likely inefficient) idea is to create vectors of query points on DM2, share these vectors among all processes, perform the interpolations on DM1, and then insert the results into the vector on DM2. Before I embark on manually setting up the communication here I wanted to just ask if there is any native support for this kind of operation in PETSc I may be missing. Thanks in advance for any advice! Best, Colton Bryant -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexlindsay239 at gmail.com Wed Jun 7 20:27:22 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Wed, 7 Jun 2023 18:27:22 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: <8735479bsg.fsf@jedbrown.org> References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> Message-ID: This has been a great discussion to follow. Regarding > when time stepping, you have enough mass matrix that cheaper preconditioners are good enough I'm curious what some algebraic recommendations might be for high Re in transients. I've found one-level DD to be ineffective when applied monolithically or to the momentum block of a split, as it scales with the mesh size. For high Re boomeramg is ineffective perhaps until https://gitlab.com/petsc/petsc/-/issues/1362 is resolved. I should try fiddling around again with Pierre's work in HPDDM, but curious if there are other PETSc PC recs, or if I need to overcome my inertia/laziness and move beyond command line options. On Mon, May 8, 2023 at 6:46?AM Jed Brown wrote: > Sebastian Blauth writes: > > > Hello everyone, > > > > I wanted to briefly follow up on my question (see my last reply). > > Does anyone know / have an idea why the LSC preconditioner in PETSc does > > not seem to scale well with the problem size (the outer fgmres solver I > > am using nearly scale nearly linearly with the problem size in my > example). > > The implementation was tested on heterogeneous Stokes problems from > geodynamics, and perhaps not on NS (or not with the discretization you're > using). > > https://doi.org/10.1016/j.pepi.2008.07.036 > > There is a comment about not having plumbing to provide a mass matrix. A > few lines earlier there is code using PetscObjectQuery, and that same > pattern could be applied for the mass matrix. If you're on a roughly > uniform mesh, including the mass scaling will probably have little effect, > but it could have a big impact in the presence of highly anistropic > elements or a broad range of scales. > > I don't think LSC has gotten a lot of use since everyone I know who tried > it has been sort of disappointed relative to other methods (e.g., inverse > viscosity scaled mass matrix for heterogeneous Stokes, PCD for moderate Re > Navier-Stokes). Of course there are no steady solutions to high Re so you > either have a turbulence model or are time stepping. I'm not aware of work > with LSC with turbulence models, and when time stepping, you have enough > mass matrix that cheaper preconditioners are good enough. That said, it > would be a great contribution to support this scaling. > > > I have also already tried using -ksp_diagonal_scale but the results are > > identical. > > That's expected, and likely to mess up some MG implementations so I > wouldn't recommend it. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Wed Jun 7 21:45:09 2023 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 7 Jun 2023 21:45:09 -0500 Subject: [petsc-users] Initializing kokkos before petsc causes a problem In-Reply-To: References: Message-ID: Hi, Philip, Thanks for reporting. I will have a look at the issue. --Junchao Zhang On Wed, Jun 7, 2023 at 9:30?AM Fackler, Philip via petsc-users < petsc-users at mcs.anl.gov> wrote: > I'm encountering a problem in xolotl. We initialize kokkos before > initializing petsc. Therefore... 
> > The pointer referenced here: > > https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/impls/basic/kokkos/sfkok.kokkos.cxx#L363 > > > from here: > https://gitlab.com/petsc/petsc/-/blob/main/include/petsc_kokkos.hpp > > remains null because the code to initialize it is skipped here: > > https://gitlab.com/petsc/petsc/-/blob/main/src/sys/objects/kokkos/kinit.kokkos.cxx#L28 > See line 71. > > Can this be modified to allow for kokkos to have been initialized by the > application before initializing petsc? > > Thank you for your help, > > > *Philip Fackler * > Research Software Engineer, Application Engineering Group > Advanced Computing Systems Research Section > Computer Science and Mathematics Division > *Oak Ridge National Laboratory* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Jun 7 22:01:49 2023 From: jed at jedbrown.org (Jed Brown) Date: Wed, 07 Jun 2023 21:01:49 -0600 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> Message-ID: <875y7ymzc2.fsf@jedbrown.org> Alexander Lindsay writes: > This has been a great discussion to follow. Regarding > >> when time stepping, you have enough mass matrix that cheaper preconditioners are good enough > > I'm curious what some algebraic recommendations might be for high Re in > transients. What mesh aspect ratio and streamline CFL number? Assuming your model is turbulent, can you say anything about momentum thickness Reynolds number Re_?? What is your wall normal spacing in plus units? (Wall resolved or wall modeled?) And to confirm, are you doing a nonlinearly implicit velocity-pressure solve? > I've found one-level DD to be ineffective when applied monolithically or to the momentum block of a split, as it scales with the mesh size. I wouldn't put too much weight on "scaling with mesh size" per se. You want an efficient solver for the coarsest mesh that delivers sufficient accuracy in your flow regime. Constants matter. Refining the mesh while holding time steps constant changes the advective CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful scaling study is to increase Reynolds number (e.g., by growing the domain) while keeping mesh size matched in terms of plus units in the viscous sublayer and Kolmogorov length in the outer boundary layer. That turns out to not be a very automatic study to do, but it's what matters and you can spend a lot of time chasing ghosts with naive scaling studies. 
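For concreteness, a minimal options sketch of the kind of split discussed in this thread (assuming a two-field velocity/pressure problem registered with the DM so PCFIELDSPLIT can find the split, and assuming hypre is available for the momentum block):

  -ksp_type fgmres
  -pc_type fieldsplit -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_fact_type full
  -pc_fieldsplit_schur_precondition self
  -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type hypre
  -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type lsc

Whether the inner momentum solve uses hypre, GAMG, or a one-level DD method is exactly the trade-off discussed above; the LSC choice for the Schur block needs -pc_fieldsplit_schur_precondition self so that the Schur complement operator itself is handed to PCLSC.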
From kalle.karhapaa at tuni.fi Thu Jun 8 00:30:32 2023 From: kalle.karhapaa at tuni.fi (=?utf-8?B?S2FsbGUgS2FyaGFww6TDpCAoVEFVKQ==?=) Date: Thu, 8 Jun 2023 05:30:32 +0000 Subject: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT In-Reply-To: References: Message-ID: Thanks Barry, make check works: Running check examples to verify correct installation Using PETSC_DIR=/home/vrkaka/SLlibs/petsc and PETSC_ARCH=arch-linux-c-opt C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes C/C++ example src/snes/tutorials/ex19 run successfully with mumps C/C++ example src/vec/vec/tests/ex47 run successfully with hdf5 Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process Running check examples to verify correct installation Using SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc, PETSC_DIR=/home/vrkaka/SLlibs/petsc, and PETSC_ARCH=arch-linux-c-opt C/C++ example src/eps/tests/test10 run successfully with 1 MPI process C/C++ example src/eps/tests/test10 run successfully with 2 MPI process Fortran example src/eps/tests/test7f run successfully with 1 MPI process Completed SLEPc test examples Completed PETSc test examples make getmpiexec gives: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec which is the mpiexec petsc built From: Barry Smith Sent: keskiviikko 7. kes?kuuta 2023 17.33 To: Kalle Karhap?? (TAU) Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT Does make check work in the PETSc directory? Is it possible the mpiexec in "mpiexec -np 2 ./simulations/default/default 1e2" is not the mpiexec built by PETSc? In the PETSc directory you can run make getmpiexec to see what mpiexec PETSc built. On Jun 7, 2023, at 6:07 AM, Kalle Karhap?? (TAU) > wrote: Hi! I am using petsc in a topology optimization project with sparselizard and ipopt. I am hoping to use mpich to run sparselizard/ipopt calculations faster, but I?m getting the following error straight away: vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2 [proxy:0:0 at WKS-101259-LT] HYD_pmcd_pmi_parse_pmi_cmd (pm/pmiserv/common.c:57): [proxy:0:0 at WKS-101259-LT] handle_pmi_cmd (pm/pmiserv/pmip_cb.c:115): unable to parse PMI command [proxy:0:0 at WKS-101259-LT] pmi_cb (pm/pmiserv/pmip_cb.c:362): unable to handle PMI command [proxy:0:0 at WKS-101259-LT] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status [proxy:0:0 at WKS-101259-LT] main (pm/pmiserv/pmip.c:169): demux engine error waiting for event the problem persists with different numbers of cores -np 1?10. Sometimes after the previous message there is the bonus error: Fatal error in internal_Init: Other MPI error, error stack: internal_Init(66): MPI_Init(argc=(nil), argv=(nil)) failed internal_Init(46): Cannot call MPI_INIT or MPI_INIT_THREAD more than once In petsc configuration I am downloading mpich. Then I?m building the sparselizard project with the same mpich downloaded through petsc installation. 
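One way to rule out a launcher mismatch (assuming the install paths above) is to invoke the PETSc-built mpiexec by its absolute path and compare it with whatever mpiexec is first on the PATH:

  /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec -np 2 ./simulations/default/default 1e2
  which mpiexec

If the mpiexec found on the PATH belongs to a different MPI installation than the MPICH that PETSc built, a PMI mismatch of the kind shown above ("unable to parse PMI command") can result.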
here is my petsc conf: ./configure --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3'; petsc install went as follows: vrkaka at WKS-101259-LT:~/sparselizardipopt/install_external_libs$ ./install_petsc.sh mkdir: cannot create directory ?/home/vrkaka/SLlibs?: File exists __________________________________________ FETCHING THE LATEST PETSC VERSION FROM GIT Cloning into 'petsc'... remote: Enumerating objects: 1097079, done. remote: Counting objects: 100% (687/687), done. remote: Compressing objects: 100% (144/144), done. remote: Total 1097079 (delta 555), reused 664 (delta 539), pack-reused 1096392 Receiving objects: 100% (1097079/1097079), 344.72 MiB | 7.14 MiB/s, done. Resolving deltas: 100% (840415/840415), done. __________________________________________ CONFIGURING PETSC ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= ============================================================================================= Trying to download https://github.com/pmodels/mpich/releases/download/v4.1.1/mpich-4.1.1.tar.gz for MPICH ============================================================================================= ============================================================================================= Running configure on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Running make on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Running make install on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-sowing.git for SOWING ============================================================================================= ============================================================================================= Running configure on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running make on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running make install on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running arch-linux-c-opt/bin/bfort to generate Fortran stubs 
============================================================================================= ============================================================================================= Trying to download http://www.zlib.net/zlib-1.2.13.tar.gz for ZLIB ============================================================================================= ============================================================================================= Building and installing zlib; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.2/src/hdf5-1.12.2.tar.bz2 for HDF5 ============================================================================================= ============================================================================================= Running configure on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Running make on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Running make install on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/parallel-netcdf/pnetcdf for PNETCDF ============================================================================================= ============================================================================================= Running libtoolize on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running autoreconf on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running configure on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make install on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/Unidata/netcdf-c/archive/v4.9.1.tar.gz for NETCDF ============================================================================================= ============================================================================================= Running configure on NETCDF; this may take 
several minutes ============================================================================================= ============================================================================================= Running make on NETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make install on NETCDF; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-med.git for MED ============================================================================================= ============================================================================================= Configuring MED with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing MED; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/gsjaardema/seacas.git for EXODUSII ============================================================================================= ============================================================================================= Configuring EXODUSII with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing EXODUSII; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-metis.git for METIS ============================================================================================= ============================================================================================= Configuring METIS with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing METIS; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/xianyi/OpenBLAS.git for OPENBLAS ============================================================================================= ============================================================================================= Compiling OpenBLAS; this may take several minutes ============================================================================================= ============================================================================================= Installing OpenBLAS 
============================================================================================= ============================================================================================= Trying to download https://github.com/Reference-ScaLAPACK/scalapack for SCALAPACK ============================================================================================= ============================================================================================= Configuring SCALAPACK with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing SCALAPACK; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://graal.ens-lyon.fr/MUMPS/MUMPS_5.6.0.tar.gz for MUMPS ============================================================================================= ============================================================================================= Compiling MUMPS; this may take several minutes ============================================================================================= ============================================================================================= Installing MUMPS; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://gitlab.com/slepc/slepc.git for SLEPC ============================================================================================= ============================================================================================= SLEPc examples are available at arch-linux-c-opt/externalpackages/git.slepc export SLEPC_DIR=arch-linux-c-opt ============================================================================================= Compilers: C Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 -fopenmp Version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 C++ Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -fopenmp Version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Fortran Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp Version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Linkers: Shared linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Dynamic linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Libraries linked against: BlasLapack: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: 
-Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas uses OpenMP; use export OMP_NUM_THREADS=
<num_threads> or -omp_num_threads <num_threads>
to control the number of threads uses 4 byte integers MPI: Version: 4 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec Implementation: mpich4 MPICH_NUMVERSION: 40101300 MPICH: python: Executable: /usr/bin/python3 openmp: Version: 201511 pthread: cmake: Version: 3.22.1 Executable: /usr/bin/cmake openblas: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas uses OpenMP; use export OMP_NUM_THREADS=
<num_threads> or -omp_num_threads <num_threads>
to control the number of threads zlib: Version: 1.2.13 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lz hdf5: Version: 1.12.2 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lhdf5_hl -lhdf5 netcdf: Version: 4.9.1 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lnetcdf pnetcdf: Version: 1.12.3 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lpnetcdf metis: Version: 5.1.0 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmetis slepc: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lslepc regex: MUMPS: Version: 5.6.0 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -ldmumps -lmumps_common -lpord -lpthread uses OpenMP; use export OMP_NUM_THREADS=
<num_threads> or -omp_num_threads <num_threads>
to control the number of threads scalapack: Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lscalapack exodusii: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lexoIIv2for32 -lexodus med: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmedC -lmed sowing: Version: 1.1.26 Executable: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/bfort PETSc: Language used to compile PETSc: C PETSC_ARCH: arch-linux-c-opt PETSC_DIR: /home/vrkaka/SLlibs/petsc Prefix: Scalar type: real Precision: double Support for __float128 Integer size: 4 bytes Single library: yes Shared libraries: yes Memory alignment from malloc(): 16 bytes Using GNU make: /usr/bin/gmake xxx=========================================================================xxx Configure stage complete. Now build PETSc libraries with: make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt all xxx=========================================================================xxx __________________________________________ COMPILING PETSC /usr/bin/python3 ./config/gmakegen.py --petsc-arch=arch-linux-c-opt /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegentest.py --petsc-dir=/home/vrkaka/SLlibs/petsc --petsc-arch=arch-linux-c-opt --testdir=./arch-linux-c-opt/tests make: '/home/vrkaka/SLlibs/petsc' is up to date. make: 'arch-linux-c-opt' is up to date. /home/vrkaka/SLlibs/petsc/lib/petsc/bin/petscnagupgrade.py:14: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives from distutils.version import LooseVersion as Version ========================================== See documentation/faq.html and documentation/bugreporting.html for help with installation problems. Please send EVERYTHING printed out below when reporting problems. Please check the mailing list archives and consider subscribing. 
https://petsc.org/release/community/mailing/ ========================================== Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:19:10 +0300 Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ----------------------------------------- Using PETSc directory: /home/vrkaka/SLlibs/petsc Using PETSc arch: arch-linux-c-opt ----------------------------------------- PETSC_VERSION_RELEASE 0 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 19 PETSC_VERSION_SUBMINOR 2 PETSC_VERSION_DATE "unknown" PETSC_VERSION_GIT "unknown" PETSC_VERSION_DATE_GIT "unknown" ----------------------------------------- Using configure Options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 Using configuration flags: #define PETSC_ARCH "arch-linux-c-opt" #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) #define PETSC_BLASLAPACK_UNDERSCORE 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" #define PETSC_DIR_SEPARATOR '/' #define PETSC_FORTRAN_CHARLEN_T size_t #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_ATTRIBUTEALIGNED 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_BZERO 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_ATOMIC 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 1 #define PETSC_HAVE_CXX_DIALECT_CXX17 1 #define PETSC_HAVE_CXX_DIALECT_CXX20 1 #define PETSC_HAVE_DLADDR 1 #define PETSC_HAVE_DLCLOSE 1 #define PETSC_HAVE_DLERROR 1 #define PETSC_HAVE_DLFCN_H 1 #define PETSC_HAVE_DLOPEN 1 #define PETSC_HAVE_DLSYM 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_DRAND48 1 #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_EXECUTABLE_EXPORT 1 #define PETSC_HAVE_EXODUSII 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FE_VALUES 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORK 1 #define PETSC_HAVE_FORTRAN 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETDOMAINNAME 1 #define PETSC_HAVE_GETHOSTBYNAME 1 #define PETSC_HAVE_GETHOSTNAME 1 #define PETSC_HAVE_GETPAGESIZE 1 #define PETSC_HAVE_GETRUSAGE 1 #define PETSC_HAVE_HDF5 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MED 1 #define 
PETSC_HAVE_MEMMOVE 1
#define PETSC_HAVE_METIS 1
#define PETSC_HAVE_MKSTEMP 1
#define PETSC_HAVE_MMAP 1
#define PETSC_HAVE_MPICH 1
#define PETSC_HAVE_MPICH_NUMVERSION 40101300
#define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3
#define PETSC_HAVE_MPIIO 1
#define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1
#define PETSC_HAVE_MPI_COMBINER_DUP 1
#define PETSC_HAVE_MPI_COMBINER_NAMED 1
#define PETSC_HAVE_MPI_F90MODULE 1
#define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1
#define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1
#define PETSC_HAVE_MPI_GET_ACCUMULATE 1
#define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1
#define PETSC_HAVE_MPI_INIT_THREAD 1
#define PETSC_HAVE_MPI_INT64_T 1
#define PETSC_HAVE_MPI_LARGE_COUNT 1
#define PETSC_HAVE_MPI_LONG_DOUBLE 1
#define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1
#define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1
#define PETSC_HAVE_MPI_ONE_SIDED 1
#define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1
#define PETSC_HAVE_MPI_REDUCE_LOCAL 1
#define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1
#define PETSC_HAVE_MPI_RGET 1
#define PETSC_HAVE_MPI_WIN_CREATE 1
#define PETSC_HAVE_MUMPS 1
#define PETSC_HAVE_NANOSLEEP 1
#define PETSC_HAVE_NETCDF 1
#define PETSC_HAVE_NETDB_H 1
#define PETSC_HAVE_NETINET_IN_H 1
#define PETSC_HAVE_OPENBLAS 1
#define PETSC_HAVE_OPENMP 1
#define PETSC_HAVE_PACKAGES ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:"
#define PETSC_HAVE_PNETCDF 1
#define PETSC_HAVE_POPEN 1
#define PETSC_HAVE_POSIX_MEMALIGN 1
#define PETSC_HAVE_PTHREAD 1
#define PETSC_HAVE_PWD_H 1
#define PETSC_HAVE_RAND 1
#define PETSC_HAVE_READLINK 1
#define PETSC_HAVE_REALPATH 1
#define PETSC_HAVE_REAL___FLOAT128 1
#define PETSC_HAVE_REGEX 1
#define PETSC_HAVE_RTLD_GLOBAL 1
#define PETSC_HAVE_RTLD_LAZY 1
#define PETSC_HAVE_RTLD_LOCAL 1
#define PETSC_HAVE_RTLD_NOW 1
#define PETSC_HAVE_SCALAPACK 1
#define PETSC_HAVE_SETJMP_H 1
#define PETSC_HAVE_SLEEP 1
#define PETSC_HAVE_SLEPC 1
#define PETSC_HAVE_SNPRINTF 1
#define PETSC_HAVE_SOCKET 1
#define PETSC_HAVE_SOWING 1
#define PETSC_HAVE_SO_REUSEADDR 1
#define PETSC_HAVE_STDATOMIC_H 1
#define PETSC_HAVE_STDINT_H 1
#define PETSC_HAVE_STRCASECMP 1
#define PETSC_HAVE_STRINGS_H 1
#define PETSC_HAVE_STRUCT_SIGACTION 1
#define PETSC_HAVE_SYS_PARAM_H 1
#define PETSC_HAVE_SYS_PROCFS_H 1
#define PETSC_HAVE_SYS_RESOURCE_H 1
#define PETSC_HAVE_SYS_SOCKET_H 1
#define PETSC_HAVE_SYS_TIMES_H 1
#define PETSC_HAVE_SYS_TIME_H 1
#define PETSC_HAVE_SYS_TYPES_H 1
#define PETSC_HAVE_SYS_UTSNAME_H 1
#define PETSC_HAVE_SYS_WAIT_H 1
#define PETSC_HAVE_TAU_PERFSTUBS 1
#define PETSC_HAVE_TGAMMA 1
#define PETSC_HAVE_TIME 1
#define PETSC_HAVE_TIME_H 1
#define PETSC_HAVE_UNAME 1
#define PETSC_HAVE_UNISTD_H 1
#define PETSC_HAVE_USLEEP 1
#define PETSC_HAVE_VA_COPY 1
#define PETSC_HAVE_VSNPRINTF 1
#define PETSC_HAVE_XMMINTRIN_H 1
#define PETSC_HDF5_HAVE_PARALLEL 1
#define PETSC_HDF5_HAVE_ZLIB 1
#define PETSC_INTPTR_T intptr_t
#define PETSC_INTPTR_T_FMT "#" PRIxPTR
#define PETSC_IS_COLORING_MAX USHRT_MAX
#define PETSC_IS_COLORING_VALUE_TYPE short
#define PETSC_IS_COLORING_VALUE_TYPE_F integer2
#define PETSC_LEVEL1_DCACHE_LINESIZE 64
#define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib"
#define PETSC_MAX_PATH_LEN 4096
#define PETSC_MEMALIGN 16
#define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi"
#define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT
#define PETSC_OMAKE "/usr/bin/gmake --no-print-directory"
#define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA
#define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0
#define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1
#define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2
#define PETSC_PYTHON_EXE "/usr/bin/python3"
#define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c))
#define PETSC_REPLACE_DIR_SEPARATOR '\\'
#define PETSC_SIGNAL_CAST
#define PETSC_SIZEOF_INT 4
#define PETSC_SIZEOF_LONG 8
#define PETSC_SIZEOF_LONG_LONG 8
#define PETSC_SIZEOF_SIZE_T 8
#define PETSC_SIZEOF_VOID_P 8
#define PETSC_SLSUFFIX "so"
#define PETSC_UINTPTR_T uintptr_t
#define PETSC_UINTPTR_T_FMT "#" PRIxPTR
#define PETSC_UNUSED __attribute((unused))
#define PETSC_USE_AVX512_KERNELS 1
#define PETSC_USE_BACKWARD_LOOP 1
#define PETSC_USE_CTABLE 1
#define PETSC_USE_DMLANDAU_2D 1
#define PETSC_USE_INFO 1
#define PETSC_USE_ISATTY 1
#define PETSC_USE_LOG 1
#define PETSC_USE_MALLOC_COALESCED 1
#define PETSC_USE_PROC_FOR_SIZE 1
#define PETSC_USE_REAL_DOUBLE 1
#define PETSC_USE_SHARED_LIBRARIES 1
#define PETSC_USE_SINGLE_LIBRARY 1
#define PETSC_USE_SOCKET_VIEWER 1
#define PETSC_USE_VISIBILITY_C 1
#define PETSC_USE_VISIBILITY_CXX 1
#define PETSC_USING_64BIT_PTR 1
#define PETSC_USING_F2003 1
#define PETSC_USING_F90FREEFORM 1
#define PETSC_VERSION_BRANCH_GIT "main"
#define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000"
#define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245"
#define PETSC__BSD_SOURCE 1
#define PETSC__DEFAULT_SOURCE 1
#define PETSC__GNU_SOURCE 1
-----------------------------------------
Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3
mpicc -show: gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi
C compiler version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp
mpicxx -show: g++ -Wno-lto-type-mismatch -Wno-psabi -O3 -std=gnu++20 -fPIC -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpicxx -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi
C++ compiler version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp
mpif90 -show: gfortran -fPIC -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -O3 -fallow-argument-mismatch -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpifort -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi
Fortran compiler version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
-----------------------------------------
Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc
Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3
Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90
Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3
-----------------------------------------
Using system modules:
Using mpi.h: # 1 "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include/mpi.h" 1
-----------------------------------------
Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++
------------------------------------------
Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec
------------------------------------------
Using MAKE: /usr/bin/gmake
Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc
==========================================
/usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= libs
FC arch-linux-c-opt/obj/sys/fsrc/somefort.o CXX arch-linux-c-opt/obj/sys/dll/cxx/demangle.o FC arch-linux-c-opt/obj/sys/f90-src/fsrc/f90_fwrap.o CC arch-linux-c-opt/obj/sys/f90-custom/zsysf90.o FC arch-linux-c-opt/obj/sys/f90-mod/petscsysmod.o CC arch-linux-c-opt/obj/sys/dll/dlimpl.o CC arch-linux-c-opt/obj/sys/dll/dl.o CC arch-linux-c-opt/obj/sys/dll/ftn-auto/regf.o CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostcontext.o CC arch-linux-c-opt/obj/sys/ftn-custom/zsys.o CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostdevice.o CC arch-linux-c-opt/obj/sys/ftn-custom/zutils.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/global_dcontext.o CC arch-linux-c-opt/obj/sys/dll/reg.o CC arch-linux-c-opt/obj/sys/logging/xmlviewer.o CC arch-linux-c-opt/obj/sys/logging/utils/stack.o CC arch-linux-c-opt/obj/sys/logging/utils/classlog.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/device.o CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zpetscloghf.o CC arch-linux-c-opt/obj/sys/logging/utils/stagelog.o CC arch-linux-c-opt/obj/sys/logging/ftn-auto/xmllogeventf.o CC arch-linux-c-opt/obj/sys/logging/ftn-auto/plogf.o CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zplogf.o CC arch-linux-c-opt/obj/sys/logging/utils/eventlog.o CC arch-linux-c-opt/obj/sys/python/ftn-custom/zpythonf.o CC arch-linux-c-opt/obj/sys/utils/arch.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/memory.o CC arch-linux-c-opt/obj/sys/python/pythonsys.o CC arch-linux-c-opt/obj/sys/utils/fhost.o CC arch-linux-c-opt/obj/sys/utils/fuser.o CC arch-linux-c-opt/obj/sys/utils/matheq.o CC arch-linux-c-opt/obj/sys/utils/mathclose.o CC arch-linux-c-opt/obj/sys/utils/mathfit.o CC arch-linux-c-opt/obj/sys/utils/mathinf.o CC arch-linux-c-opt/obj/sys/utils/ctable.o CC arch-linux-c-opt/obj/sys/utils/memc.o
CC arch-linux-c-opt/obj/sys/utils/mpilong.o CC arch-linux-c-opt/obj/sys/logging/xmllogevent.o CC arch-linux-c-opt/obj/sys/utils/mpitr.o CC arch-linux-c-opt/obj/sys/utils/mpishm.o CC arch-linux-c-opt/obj/sys/utils/pbarrier.o CC arch-linux-c-opt/obj/sys/utils/mpiu.o CC arch-linux-c-opt/obj/sys/utils/psleep.o CC arch-linux-c-opt/obj/sys/utils/pdisplay.o CC arch-linux-c-opt/obj/sys/utils/psplit.o CC arch-linux-c-opt/obj/sys/utils/segbuffer.o CC arch-linux-c-opt/obj/sys/utils/mpimesg.o CC arch-linux-c-opt/obj/sys/utils/sortd.o CC arch-linux-c-opt/obj/sys/utils/sseenabled.o CC arch-linux-c-opt/obj/sys/utils/sortip.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zarchf.o CC arch-linux-c-opt/obj/sys/utils/mpits.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zfhostf.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zsortsof.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zstrf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/memcf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpitsf.o CC arch-linux-c-opt/obj/sys/logging/plog.o CC arch-linux-c-opt/obj/sys/utils/str.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpiuf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psleepf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psplitf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortdf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortipf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortsof.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortif.o CC arch-linux-c-opt/obj/sys/totalview/tv_data_display.o CC arch-linux-c-opt/obj/sys/objects/gcomm.o CC arch-linux-c-opt/obj/sys/objects/gcookie.o CC arch-linux-c-opt/obj/sys/objects/fcallback.o CC arch-linux-c-opt/obj/sys/objects/destroy.o CC arch-linux-c-opt/obj/sys/objects/gtype.o CC arch-linux-c-opt/obj/sys/utils/sorti.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/dcontext.o CC arch-linux-c-opt/obj/sys/objects/olist.o CC arch-linux-c-opt/obj/sys/objects/garbage.o CC arch-linux-c-opt/obj/sys/objects/pgname.o CC arch-linux-c-opt/obj/sys/objects/package.o CC arch-linux-c-opt/obj/sys/objects/inherit.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/mark_dcontext.o CC arch-linux-c-opt/obj/sys/utils/sortso.o CC arch-linux-c-opt/obj/sys/objects/aoptions.o CC arch-linux-c-opt/obj/sys/objects/prefix.o CC arch-linux-c-opt/obj/sys/objects/init.o CC arch-linux-c-opt/obj/sys/objects/pname.o CC arch-linux-c-opt/obj/sys/objects/ptype.o CC arch-linux-c-opt/obj/sys/objects/state.o CC arch-linux-c-opt/obj/sys/objects/version.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/destroyf.o CC arch-linux-c-opt/obj/sys/objects/device/util/memory.o CC arch-linux-c-opt/obj/sys/objects/device/util/devicereg.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcommf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcookief.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/inheritf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/optionsf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/pinitf.o CC arch-linux-c-opt/obj/sys/objects/tagm.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/statef.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/subcommf.o CC arch-linux-c-opt/obj/sys/objects/subcomm.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/tagmf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgcommf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zdestroyf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgtype.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zinheritf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsyamlf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpackage.o CC 
arch-linux-c-opt/obj/sys/objects/ftn-custom/zpgnamef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpnamef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zprefixf.o CC arch-linux-c-opt/obj/sys/objects/pinit.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zptypef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstartf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zversionf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstart.o CC arch-linux-c-opt/obj/sys/memory/mhbw.o CC arch-linux-c-opt/obj/sys/memory/mem.o CC arch-linux-c-opt/obj/sys/memory/ftn-auto/memf.o CC arch-linux-c-opt/obj/sys/memory/ftn-custom/zmtrf.o CC arch-linux-c-opt/obj/sys/memory/mal.o CC arch-linux-c-opt/obj/sys/memory/ftn-auto/mtrf.o CC arch-linux-c-opt/obj/sys/perfstubs/pstimer.o CC arch-linux-c-opt/obj/sys/error/errabort.o CC arch-linux-c-opt/obj/sys/error/checkptr.o CC arch-linux-c-opt/obj/sys/error/errstop.o CC arch-linux-c-opt/obj/sys/error/pstack.o CC arch-linux-c-opt/obj/sys/error/adebug.o CC arch-linux-c-opt/obj/sys/error/errtrace.o CC arch-linux-c-opt/obj/sys/error/fp.o CC arch-linux-c-opt/obj/sys/memory/mtr.o CC arch-linux-c-opt/obj/sys/error/signal.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/adebugf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/checkptrf.o CC arch-linux-c-opt/obj/sys/objects/options.o CC arch-linux-c-opt/obj/sys/error/ftn-custom/zerrf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/errf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/fpf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/signalf.o CC arch-linux-c-opt/obj/sys/error/err.o CC arch-linux-c-opt/obj/sys/fileio/fpath.o CC arch-linux-c-opt/obj/sys/fileio/fdir.o CC arch-linux-c-opt/obj/sys/fileio/fwd.o CC arch-linux-c-opt/obj/sys/fileio/ghome.o CC arch-linux-c-opt/obj/sys/fileio/ftest.o CC arch-linux-c-opt/obj/sys/fileio/grpath.o CC arch-linux-c-opt/obj/sys/fileio/rpath.o CC arch-linux-c-opt/obj/sys/fileio/mpiuopen.o CC arch-linux-c-opt/obj/sys/fileio/smatlab.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmpiuopenf.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zghomef.o CC arch-linux-c-opt/obj/sys/fileio/fretrieve.o CC arch-linux-c-opt/obj/sys/fileio/ftn-auto/sysiof.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmprintf.o CC arch-linux-c-opt/obj/sys/info/ftn-auto/verboseinfof.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zsysiof.o CC arch-linux-c-opt/obj/sys/info/ftn-custom/zverboseinfof.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/axis.o CC arch-linux-c-opt/obj/sys/fileio/mprint.o CC arch-linux-c-opt/obj/sys/info/verboseinfo.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/bars.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/cmap.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/image.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/axisc.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/dscatter.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/lg.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/zoom.o CC arch-linux-c-opt/obj/sys/fileio/sysio.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zlgcf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/hists.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zzoomf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zaxisf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/axiscf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/barsf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/lgc.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/dscatterf.o CC 
arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/histsf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dcoor.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dclear.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgcf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dellipse.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dflush.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpause.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dline.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmarker.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmouse.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpoint.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawregall.o CC arch-linux-c-opt/obj/sys/objects/optionsyaml.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drect.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawreg.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/draw.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtext.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawregf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtextf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dsave.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtrif.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtri.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dclearf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dcoorf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dviewp.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dellipsef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dflushf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmousef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmarkerf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dlinef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpausef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpointf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawregf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drectf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dsavef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtextf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtrif.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dviewpf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/ftn-auto/drawnullf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/drawnull.o CC arch-linux-c-opt/obj/sys/classes/random/interface/dlregisrand.o CC arch-linux-c-opt/obj/sys/classes/random/interface/random.o CC arch-linux-c-opt/obj/sys/classes/random/interface/randreg.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomcf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/tikz/tikz.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-custom/zrandomf.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomf.o CC arch-linux-c-opt/obj/sys/classes/random/interface/randomc.o CC arch-linux-c-opt/obj/sys/classes/random/impls/rand48/rand48.o CC arch-linux-c-opt/obj/sys/classes/random/impls/rand/rand.o CC arch-linux-c-opt/obj/sys/classes/bag/ftn-auto/bagf.o CC 
arch-linux-c-opt/obj/sys/classes/random/impls/rander48/rander48.o CC arch-linux-c-opt/obj/sys/classes/bag/ftn-custom/zbagf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dupl.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/flush.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dlregispetsc.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewa.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewers.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewasetf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewregall.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/view.o CC arch-linux-c-opt/obj/sys/classes/bag/f90-custom/zbagf90.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewaf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/image/drawimage.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewregf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/ftn-auto/glvisf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-auto/drawvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-custom/zdrawvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-custom/zbinvf.o CC arch-linux-c-opt/obj/sys/classes/bag/bag.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-auto/binvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/f90-custom/zbinvf90.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewreg.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/ftn-custom/zsendf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-auto/hdf5vf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/ftn-custom/zstringvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/stringv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-custom/zhdf5f.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/drawv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/send.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/ftn-custom/zvtkvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/glvis.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vu/petscvu.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/vtkv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zvcreatef.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/filevf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/vcreateaf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/vcreatea.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zfilevf.o CC arch-linux-c-opt/obj/sys/time/cputime.o CC arch-linux-c-opt/obj/sys/time/fdate.o CC arch-linux-c-opt/obj/sys/time/ftn-auto/cputimef.o CC arch-linux-c-opt/obj/sys/time/ftn-custom/zptimef.o CC arch-linux-c-opt/obj/sys/f90-src/f90_cwrap.o CC arch-linux-c-opt/obj/vec/pf/interface/pfall.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/hdf5v.o CC arch-linux-c-opt/obj/vec/pf/interface/ftn-custom/zpff.o CC arch-linux-c-opt/obj/vec/pf/interface/ftn-auto/pff.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/binv.o CC arch-linux-c-opt/obj/vec/pf/impls/constant/const.o CC arch-linux-c-opt/obj/vec/pf/interface/pf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/filev.o CC arch-linux-c-opt/obj/vec/pf/impls/string/cstring.o CC arch-linux-c-opt/obj/vec/is/utils/isio.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zhdf5io.o CC 
arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zisltogf.o CC arch-linux-c-opt/obj/vec/is/utils/pmap.o CC arch-linux-c-opt/obj/vec/is/utils/hdf5io.o CC arch-linux-c-opt/obj/vec/is/utils/f90-custom/zisltogf90.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zvsectionisf.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/isltogf.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/pmapf.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-auto/psortf.o CC arch-linux-c-opt/obj/vec/is/is/utils/f90-custom/ziscoloringf90.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-custom/ziscoloringf.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/isblockf.o CC arch-linux-c-opt/obj/vec/is/is/utils/iscomp.o CC arch-linux-c-opt/obj/vec/is/utils/psort.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/iscompf.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/iscoloringf.o CC arch-linux-c-opt/obj/vec/is/is/utils/ftn-auto/isdifff.o CC arch-linux-c-opt/obj/vec/is/is/utils/isblock.o CC arch-linux-c-opt/obj/vec/is/is/interface/isreg.o CC arch-linux-c-opt/obj/vec/is/is/interface/isregall.o CC arch-linux-c-opt/obj/vec/is/is/interface/f90-custom/zindexf90.o CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-auto/indexf.o CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-custom/zindexf.o CC arch-linux-c-opt/obj/vec/is/is/interface/ftn-auto/isregf.o CC arch-linux-c-opt/obj/vec/is/is/impls/stride/ftn-auto/stridef.o CC arch-linux-c-opt/obj/vec/is/is/utils/isdiff.o CC arch-linux-c-opt/obj/vec/is/is/utils/iscoloring.o CC arch-linux-c-opt/obj/vec/is/is/impls/block/ftn-custom/zblockf.o CC arch-linux-c-opt/obj/vec/is/is/impls/block/ftn-auto/blockf.o FC arch-linux-c-opt/obj/vec/f90-mod/petscvecmod.o CC arch-linux-c-opt/obj/vec/is/is/impls/f90-custom/zblockf90.o CC arch-linux-c-opt/obj/vec/is/is/impls/stride/stride.o CC arch-linux-c-opt/obj/vec/is/is/impls/general/ftn-auto/generalf.o CC arch-linux-c-opt/obj/vec/is/section/interface/ftn-custom/zsectionf.o CC arch-linux-c-opt/obj/vec/is/section/interface/f90-custom/zvsectionisf90.o CC arch-linux-c-opt/obj/vec/is/section/interface/ftn-auto/sectionf.o CC arch-linux-c-opt/obj/vec/is/is/impls/block/block.o CC arch-linux-c-opt/obj/vec/is/ao/interface/aoreg.o CC arch-linux-c-opt/obj/vec/is/ao/interface/ao.o CC arch-linux-c-opt/obj/vec/is/ao/interface/aoregall.o CC arch-linux-c-opt/obj/vec/is/ao/interface/dlregisdm.o CC arch-linux-c-opt/obj/vec/is/ao/interface/ftn-auto/aof.o CC arch-linux-c-opt/obj/vec/is/ao/interface/ftn-custom/zaof.o CC arch-linux-c-opt/obj/vec/is/ao/impls/basic/ftn-custom/zaobasicf.o CC arch-linux-c-opt/obj/vec/is/section/interface/sectionhdf5.o CC arch-linux-c-opt/obj/vec/is/is/impls/general/general.o CC arch-linux-c-opt/obj/vec/is/utils/isltog.o CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/ftn-auto/aomappingf.o CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/ftn-custom/zaomappingf.o CC arch-linux-c-opt/obj/vec/is/is/interface/index.o CC arch-linux-c-opt/obj/vec/is/ao/impls/basic/aobasic.o CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-custom/zsfutilsf.o CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-auto/sfcoordf.o CC arch-linux-c-opt/obj/vec/is/sf/utils/f90-custom/zsfutilsf90.o CC arch-linux-c-opt/obj/vec/is/ao/impls/mapping/aomapping.o CC arch-linux-c-opt/obj/vec/is/sf/utils/ftn-auto/sfutilsf.o CC arch-linux-c-opt/obj/vec/is/sf/utils/sfcoord.o CC arch-linux-c-opt/obj/vec/is/sf/interface/dlregissf.o CC arch-linux-c-opt/obj/vec/is/sf/interface/sfregi.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-custom/zsf.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-custom/zvscat.o CC 
arch-linux-c-opt/obj/vec/is/sf/interface/sftype.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-auto/sff.o CC arch-linux-c-opt/obj/vec/is/sf/interface/ftn-auto/vscatf.o CC arch-linux-c-opt/obj/vec/is/ao/impls/memscalable/aomemscalable.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/gather/sfgather.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/gatherv/sfgatherv.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfmpi.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/alltoall/sfalltoall.o CC arch-linux-c-opt/obj/vec/is/sf/utils/sfutils.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/allgather/sfallgather.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfbasic.o CC arch-linux-c-opt/obj/vec/is/sf/interface/vscat.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/neighbor/sfneighbor.o CC arch-linux-c-opt/obj/vec/vec/utils/vecglvis.o CC arch-linux-c-opt/obj/vec/is/section/interface/section.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/allgatherv/sfallgatherv.o CC arch-linux-c-opt/obj/vec/vec/utils/vecio.o CC arch-linux-c-opt/obj/vec/vec/utils/vecs.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/dlregistagger.o CC arch-linux-c-opt/obj/vec/vec/utils/comb.o CC arch-linux-c-opt/obj/vec/is/sf/impls/window/sfwindow.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/tagger.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/taggerregi.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/interface/ftn-auto/taggerf.o CC arch-linux-c-opt/obj/vec/vec/utils/vsection.o CC arch-linux-c-opt/obj/vec/vec/utils/projection.o CC arch-linux-c-opt/obj/vec/vec/utils/vecstash.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/absolute.o CC arch-linux-c-opt/obj/vec/is/sf/interface/sf.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/and.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/andor.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/or.o CC arch-linux-c-opt/obj/vec/vec/utils/f90-custom/zvsectionf90.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/relative.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/simple.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/combf.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/projectionf.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/veciof.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/vsectionf.o CC arch-linux-c-opt/obj/vec/vec/utils/ftn-auto/vinvf.o CC arch-linux-c-opt/obj/vec/vec/utils/tagger/impls/cdf.o CC arch-linux-c-opt/obj/vec/vec/interface/veccreate.o CC arch-linux-c-opt/obj/vec/vec/interface/vecregall.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-custom/zvecregf.o CC arch-linux-c-opt/obj/vec/vec/interface/dlregisvec.o CC arch-linux-c-opt/obj/vec/vec/interface/vecreg.o CC arch-linux-c-opt/obj/vec/vec/interface/f90-custom/zvectorf90.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/veccreatef.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/rvectorf.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-auto/vectorf.o CC arch-linux-c-opt/obj/vec/vec/interface/ftn-custom/zvectorf.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec3.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec1.o CC arch-linux-c-opt/obj/vec/vec/utils/vinv.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/vseqcr.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/ftn-custom/zbvec2f.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/ftn-auto/vseqcrf.o CC arch-linux-c-opt/obj/vec/vec/impls/shared/ftn-auto/shvecf.o CC arch-linux-c-opt/obj/vec/vec/impls/shared/shvec.o CC arch-linux-c-opt/obj/vec/vec/impls/nest/ftn-custom/zvecnestf.o CC 
arch-linux-c-opt/obj/vec/vec/impls/nest/ftn-auto/vecnestf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/commonmpvec.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/dvec2.o CC arch-linux-c-opt/obj/vec/vec/interface/vector.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/vmpicr.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pvec2.o CC arch-linux-c-opt/obj/vec/vec/impls/seq/bvec2.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-custom/zpbvecf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/commonmpvecf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/vmpicrf.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/ftn-auto/pbvecf.o CC arch-linux-c-opt/obj/mat/coarsen/scoarsen.o CC arch-linux-c-opt/obj/mat/coarsen/ftn-auto/coarsenf.o CC arch-linux-c-opt/obj/mat/coarsen/ftn-custom/zcoarsenf.o CC arch-linux-c-opt/obj/vec/vec/interface/rvector.o CC arch-linux-c-opt/obj/mat/coarsen/coarsen.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pbvec.o CC arch-linux-c-opt/obj/mat/coarsen/impls/misk/ftn-auto/miskf.o CC arch-linux-c-opt/obj/vec/vec/impls/nest/vecnest.o CC arch-linux-c-opt/obj/mat/color/utils/bipartite.o FC arch-linux-c-opt/obj/mat/f90-mod/petscmatmod.o CC arch-linux-c-opt/obj/mat/color/utils/valid.o CC arch-linux-c-opt/obj/mat/coarsen/impls/mis/mis.o CC arch-linux-c-opt/obj/mat/color/interface/matcoloring.o CC arch-linux-c-opt/obj/mat/color/interface/matcoloringregi.o CC arch-linux-c-opt/obj/mat/coarsen/impls/misk/misk.o CC arch-linux-c-opt/obj/mat/color/interface/ftn-custom/zmatcoloringf.o CC arch-linux-c-opt/obj/mat/color/interface/ftn-auto/matcoloringf.o CC arch-linux-c-opt/obj/mat/color/utils/weights.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/degr.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/numsrt.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/dsm.o CC arch-linux-c-opt/obj/vec/vec/impls/mpi/pdvec.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/ido.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/seq.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/setr.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/slo.o CC arch-linux-c-opt/obj/mat/color/impls/power/power.o CC arch-linux-c-opt/obj/mat/color/impls/minpack/color.o CC arch-linux-c-opt/obj/mat/color/impls/natural/natural.o CC arch-linux-c-opt/obj/mat/utils/bandwidth.o CC arch-linux-c-opt/obj/mat/utils/compressedrow.o CC arch-linux-c-opt/obj/mat/utils/convert.o CC arch-linux-c-opt/obj/mat/utils/freespace.o CC arch-linux-c-opt/obj/mat/coarsen/impls/hem/hem.o CC arch-linux-c-opt/obj/mat/utils/getcolv.o CC arch-linux-c-opt/obj/mat/utils/matio.o CC arch-linux-c-opt/obj/mat/utils/matstashspace.o CC arch-linux-c-opt/obj/mat/utils/axpy.o CC arch-linux-c-opt/obj/mat/color/impls/jp/jp.o CC arch-linux-c-opt/obj/mat/utils/pheap.o CC arch-linux-c-opt/obj/mat/utils/gcreate.o CC arch-linux-c-opt/obj/mat/utils/veccreatematdense.o CC arch-linux-c-opt/obj/mat/utils/overlapsplit.o CC arch-linux-c-opt/obj/mat/utils/zerodiag.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/axpyf.o CC arch-linux-c-opt/obj/mat/utils/multequal.o CC arch-linux-c-opt/obj/mat/utils/zerorows.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/bandwidthf.o CC arch-linux-c-opt/obj/mat/color/impls/greedy/greedy.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/gcreatef.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/getcolvf.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/multequalf.o CC arch-linux-c-opt/obj/mat/utils/ftn-auto/zerodiagf.o CC arch-linux-c-opt/obj/mat/order/degree.o CC arch-linux-c-opt/obj/mat/order/fn1wd.o CC arch-linux-c-opt/obj/mat/order/fndsep.o CC arch-linux-c-opt/obj/mat/order/fnroot.o CC 
arch-linux-c-opt/obj/mat/order/gen1wd.o CC arch-linux-c-opt/obj/mat/order/gennd.o CC arch-linux-c-opt/obj/mat/order/genrcm.o CC arch-linux-c-opt/obj/mat/order/genqmd.o CC arch-linux-c-opt/obj/mat/order/qmdqt.o CC arch-linux-c-opt/obj/mat/order/qmdmrg.o CC arch-linux-c-opt/obj/mat/order/qmdrch.o CC arch-linux-c-opt/obj/mat/utils/matstash.o CC arch-linux-c-opt/obj/mat/order/qmdupd.o CC arch-linux-c-opt/obj/mat/order/rcm.o CC arch-linux-c-opt/obj/mat/order/rootls.o CC arch-linux-c-opt/obj/mat/order/sp1wd.o CC arch-linux-c-opt/obj/mat/order/spnd.o CC arch-linux-c-opt/obj/mat/order/spqmd.o CC arch-linux-c-opt/obj/mat/order/sprcm.o CC arch-linux-c-opt/obj/mat/order/wbm.o CC arch-linux-c-opt/obj/mat/order/sregis.o CC arch-linux-c-opt/obj/mat/order/ftn-custom/zsorderf.o CC arch-linux-c-opt/obj/mat/order/sorder.o CC arch-linux-c-opt/obj/mat/order/ftn-auto/spectralf.o CC arch-linux-c-opt/obj/mat/order/spectral.o CC arch-linux-c-opt/obj/mat/order/metisnd/metisnd.o CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatnullf.o CC arch-linux-c-opt/obj/mat/interface/matregis.o CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatregf.o CC arch-linux-c-opt/obj/mat/interface/matreg.o CC arch-linux-c-opt/obj/mat/interface/matnull.o CC arch-linux-c-opt/obj/mat/interface/dlregismat.o CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matnullf.o CC arch-linux-c-opt/obj/mat/interface/f90-custom/zmatrixf90.o CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matproductf.o CC arch-linux-c-opt/obj/mat/ftn-custom/zmat.o CC arch-linux-c-opt/obj/mat/matfd/ftn-custom/zfdmatrixf.o CC arch-linux-c-opt/obj/mat/matfd/ftn-auto/fdmatrixf.o CC arch-linux-c-opt/obj/mat/interface/ftn-auto/matrixf.o CC arch-linux-c-opt/obj/mat/interface/matproduct.o CC arch-linux-c-opt/obj/mat/impls/transpose/transm.o CC arch-linux-c-opt/obj/mat/interface/ftn-custom/zmatrixf.o CC arch-linux-c-opt/obj/mat/impls/transpose/ftn-auto/htransmf.o CC arch-linux-c-opt/obj/mat/impls/transpose/ftn-auto/transmf.o CC arch-linux-c-opt/obj/mat/impls/transpose/htransm.o CC arch-linux-c-opt/obj/mat/matfd/fdmatrix.o CC arch-linux-c-opt/obj/mat/impls/normal/ftn-auto/normmf.o CC arch-linux-c-opt/obj/mat/impls/normal/ftn-auto/normmhf.o CC arch-linux-c-opt/obj/mat/impls/python/ftn-custom/zpythonmf.o CC arch-linux-c-opt/obj/mat/impls/python/pythonmat.o CC arch-linux-c-opt/obj/mat/impls/sell/seq/fdsell.o CC arch-linux-c-opt/obj/mat/impls/sell/seq/ftn-custom/zsellf.o CC arch-linux-c-opt/obj/mat/impls/normal/normmh.o CC arch-linux-c-opt/obj/mat/impls/normal/normm.o CC arch-linux-c-opt/obj/mat/impls/is/ftn-auto/matisf.o CC arch-linux-c-opt/obj/mat/impls/shell/ftn-auto/shellf.o CC arch-linux-c-opt/obj/mat/impls/shell/ftn-custom/zshellf.o CC arch-linux-c-opt/obj/mat/impls/shell/shellcnv.o CC arch-linux-c-opt/obj/mat/impls/sell/mpi/mmsell.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/aijsbaij.o CC arch-linux-c-opt/obj/mat/impls/shell/shell.o CC arch-linux-c-opt/obj/mat/impls/sell/seq/sell.o CC arch-linux-c-opt/obj/mat/impls/sell/mpi/mpisell.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact10.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact3.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact11.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact12.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaij2.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact4.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact5.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact6.o CC 
arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact7.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/ftn-custom/zsbaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sro.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact8.o CC arch-linux-c-opt/obj/mat/impls/is/matis.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/ftn-auto/sbaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/ftn-custom/zmpisbaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact9.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mpiaijsbaij.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/ftn-auto/mpisbaijf.o CC arch-linux-c-opt/obj/mat/impls/kaij/ftn-auto/kaijf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaij.o CC arch-linux-c-opt/obj/mat/interface/matrix.o CC arch-linux-c-opt/obj/mat/impls/adj/mpi/ftn-custom/zmpiadjf.o CC arch-linux-c-opt/obj/mat/impls/adj/mpi/ftn-auto/mpiadjf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mmsbaij.o CC arch-linux-c-opt/obj/mat/impls/diagonal/ftn-auto/diagonalf.o CC arch-linux-c-opt/obj/mat/impls/scalapack/ftn-auto/matscalapackf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/sbaijov.o CC arch-linux-c-opt/obj/mat/impls/lrc/ftn-auto/lrcf.o CC arch-linux-c-opt/obj/mat/impls/diagonal/diagonal.o CC arch-linux-c-opt/obj/mat/impls/lrc/lrc.o CC arch-linux-c-opt/obj/mat/impls/fft/ftn-custom/zfftf.o CC arch-linux-c-opt/obj/mat/impls/fft/fft.o CC arch-linux-c-opt/obj/mat/impls/dummy/matdummy.o CC arch-linux-c-opt/obj/mat/impls/submat/ftn-auto/submatf.o CC arch-linux-c-opt/obj/mat/impls/cdiagonal/ftn-auto/cdiagonalf.o CC arch-linux-c-opt/obj/mat/impls/sbaij/seq/sbaijfact2.o CC arch-linux-c-opt/obj/mat/impls/submat/submat.o CC arch-linux-c-opt/obj/mat/impls/cdiagonal/cdiagonal.o CC arch-linux-c-opt/obj/mat/impls/maij/ftn-auto/maijf.o CC arch-linux-c-opt/obj/mat/impls/composite/ftn-auto/mcompositef.o CC arch-linux-c-opt/obj/mat/impls/adj/mpi/mpiadj.o CC arch-linux-c-opt/obj/mat/impls/nest/ftn-custom/zmatnestf.o CC arch-linux-c-opt/obj/mat/impls/nest/ftn-auto/matnestf.o CC arch-linux-c-opt/obj/mat/impls/kaij/kaij.o CC arch-linux-c-opt/obj/mat/impls/composite/mcomposite.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijhdf5.o CC arch-linux-c-opt/obj/mat/impls/scalapack/matscalapack.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/ij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/inode2.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/fdaij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matmatmatmult.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matptap.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matrart.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/mattransposematmult.o CC arch-linux-c-opt/obj/mat/impls/sbaij/mpi/mpisbaij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/symtranspose.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/ftn-custom/zaijf.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/ftn-auto/aijf.o CC arch-linux-c-opt/obj/mat/impls/nest/matnest.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/bas/basfactor.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijsell/aijsell.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/crl/crl.o CC arch-linux-c-opt/obj/mat/impls/maij/maij.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijfact.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aijperm/aijperm.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpb_aij.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiaijpc.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/bas/spbas.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimatmatmatmult.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimattransposematmult.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mmaij.o CC 
arch-linux-c-opt/obj/mat/impls/aij/mpi/fdmpiaij.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mumps/ftn-auto/mumpsf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/aijsell/mpiaijsell.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/matmatmult.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/ftn-auto/mpiaijf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/aijperm/mpiaijperm.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/ftn-custom/zmpiaijf.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/inode.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/crl/mcrl.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/ftn-custom/zdensef.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/densehdf5.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/ftn-auto/densef.o CC arch-linux-c-opt/obj/mat/impls/aij/seq/aij.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/mmdense.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/ftn-custom/zmpidensef.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/ftn-auto/mpidensef.o CC arch-linux-c-opt/obj/mat/impls/preallocator/ftn-auto/matpreallocatorf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpimatmatmult.o CC arch-linux-c-opt/obj/mat/impls/preallocator/matpreallocator.o CC arch-linux-c-opt/obj/mat/impls/mffd/mffd.o CC arch-linux-c-opt/obj/mat/impls/mffd/mfregis.o CC arch-linux-c-opt/obj/mat/impls/mffd/mffddef.o CC arch-linux-c-opt/obj/mat/impls/mffd/wp.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/mffddeff.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-custom/zmffdf.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mumps/mumps.o CC arch-linux-c-opt/obj/mat/impls/dense/mpi/mpidense.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/wpf.o CC arch-linux-c-opt/obj/mat/impls/mffd/ftn-auto/mffdf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/aijbaij.o CC arch-linux-c-opt/obj/mat/impls/dense/seq/dense.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiptap.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact11.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiov.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact13.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact81.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat1.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat11.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact9.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat14.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baij2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolv.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat15.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran1.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijfact5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvnat7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrann.o CC 
arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtran7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat1.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgedi.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat4.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa3.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baijsolvtrannat7.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa4.o CC arch-linux-c-opt/obj/mat/impls/aij/mpi/mpiaij.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa5.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa2.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/ftn-custom/zbaijf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa6.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/ftn-auto/baijf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/dgefa7.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/ftn-auto/mpibaijf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/ftn-custom/zmpibaijf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpiaijbaij.o CC arch-linux-c-opt/obj/mat/impls/scatter/mscatter.o CC arch-linux-c-opt/obj/mat/impls/scatter/ftn-auto/mscatterf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpb_baij.o CC arch-linux-c-opt/obj/mat/impls/localref/ftn-auto/mlocalreff.o CC arch-linux-c-opt/obj/mat/impls/centering/ftn-auto/centeringf.o CC arch-linux-c-opt/obj/mat/impls/baij/seq/baij.o CC arch-linux-c-opt/obj/mat/impls/centering/centering.o CC arch-linux-c-opt/obj/mat/impls/localref/mlocalref.o CC arch-linux-c-opt/obj/mat/partition/spartition.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mmbaij.o CC arch-linux-c-opt/obj/mat/partition/ftn-auto/partitionf.o CC arch-linux-c-opt/obj/mat/partition/ftn-custom/zpartitionf.o CC arch-linux-c-opt/obj/dm/dt/space/interface/ftn-auto/spacef.o CC arch-linux-c-opt/obj/mat/partition/partition.o CC arch-linux-c-opt/obj/dm/dt/space/interface/space.o CC arch-linux-c-opt/obj/dm/dt/space/impls/ptrimmed/ftn-auto/spaceptrimmedf.o CC arch-linux-c-opt/obj/mat/partition/impls/hierarchical/hierarchical.o CC arch-linux-c-opt/obj/dm/dt/space/impls/point/ftn-auto/spacepointf.o CC arch-linux-c-opt/obj/dm/dt/space/impls/ptrimmed/spaceptrimmed.o CC arch-linux-c-opt/obj/dm/dt/space/impls/point/spacepoint.o CC arch-linux-c-opt/obj/dm/dt/space/impls/tensor/ftn-auto/spacetensorf.o CC arch-linux-c-opt/obj/mat/impls/blockmat/seq/blockmat.o CC arch-linux-c-opt/obj/dm/dt/space/impls/sum/ftn-auto/spacesumf.o CC arch-linux-c-opt/obj/dm/dt/space/impls/wxy/spacewxy.o CC arch-linux-c-opt/obj/dm/dt/space/impls/subspace/ftn-auto/spacesubspacef.o CC arch-linux-c-opt/obj/dm/dt/space/impls/poly/ftn-auto/spacepolyf.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/feceed.o CC arch-linux-c-opt/obj/dm/dt/space/impls/sum/spacesum.o CC arch-linux-c-opt/obj/dm/dt/space/impls/poly/spacepoly.o FC arch-linux-c-opt/obj/dm/f90-mod/petscdmmod.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-custom/zfef.o CC arch-linux-c-opt/obj/dm/dt/space/impls/tensor/spacetensor.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-auto/fegeomf.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/ftn-auto/fef.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/baijov.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/fegeom.o CC arch-linux-c-opt/obj/dm/dt/space/impls/subspace/spacesubspace.o CC arch-linux-c-opt/obj/dm/dt/fv/interface/fvceed.o CC 
arch-linux-c-opt/obj/dm/dt/fv/interface/ftn-auto/fvf.o CC arch-linux-c-opt/obj/dm/dt/fv/interface/ftn-custom/zfvf.o CC arch-linux-c-opt/obj/dm/dt/fe/impls/composite/fecomposite.o CC arch-linux-c-opt/obj/dm/dt/interface/dtprob.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdsf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdtf.o CC arch-linux-c-opt/obj/dm/dt/fe/interface/fe.o CC arch-linux-c-opt/obj/dm/dt/fv/interface/fv.o CC arch-linux-c-opt/obj/dm/dt/interface/f90-custom/zdtdsf90.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-custom/zdtfef.o CC arch-linux-c-opt/obj/dm/dt/interface/f90-custom/zdtf90.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtaltvf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtdsf.o CC arch-linux-c-opt/obj/dm/dt/fe/impls/basic/febasic.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtprobf.o CC arch-linux-c-opt/obj/dm/dt/interface/ftn-auto/dtweakformf.o CC arch-linux-c-opt/obj/dm/dt/dualspace/interface/ftn-auto/dualspacef.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/refined/ftn-auto/dualspacerefinedf.o CC arch-linux-c-opt/obj/dm/dt/interface/dtweakform.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/refined/dualspacerefined.o CC arch-linux-c-opt/obj/dm/dt/interface/dtaltv.o CC arch-linux-c-opt/obj/dm/dt/interface/dtds.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/lagrange/ftn-auto/dspacelagrangef.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/simple/ftn-auto/dspacesimplef.o CC arch-linux-c-opt/obj/dm/label/ftn-custom/zdmlabel.o CC arch-linux-c-opt/obj/dm/label/ftn-auto/dmlabelf.o CC arch-linux-c-opt/obj/mat/impls/baij/mpi/mpibaij.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/simple/dspacesimple.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/plex/dmlabelephplex.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/plex/ftn-auto/dmlabelephplexf.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/ftn-auto/dmlabelephf.o CC arch-linux-c-opt/obj/dm/label/impls/ephemeral/dmlabeleph.o CC arch-linux-c-opt/obj/dm/interface/dmceed.o CC arch-linux-c-opt/obj/dm/interface/dlregisdmdm.o CC arch-linux-c-opt/obj/dm/interface/dmgenerate.o CC arch-linux-c-opt/obj/dm/dt/dualspace/interface/dualspace.o CC arch-linux-c-opt/obj/dm/interface/dmget.o CC arch-linux-c-opt/obj/dm/interface/dmglvis.o CC arch-linux-c-opt/obj/dm/interface/dmcoordinates.o CC arch-linux-c-opt/obj/dm/dt/interface/dt.o CC arch-linux-c-opt/obj/dm/interface/ftn-custom/zdmgetf.o CC arch-linux-c-opt/obj/dm/interface/dmregall.o CC arch-linux-c-opt/obj/dm/interface/dmperiodicity.o CC arch-linux-c-opt/obj/dm/interface/ftn-custom/zdmf.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmcoordinatesf.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmgetf.o CC arch-linux-c-opt/obj/dm/interface/dmi.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmperiodicityf.o CC arch-linux-c-opt/obj/dm/interface/ftn-auto/dmf.o CC arch-linux-c-opt/obj/dm/field/interface/dlregisdmfield.o CC arch-linux-c-opt/obj/dm/field/interface/dmfieldregi.o CC arch-linux-c-opt/obj/dm/field/interface/ftn-auto/dmfieldf.o CC arch-linux-c-opt/obj/dm/field/interface/dmfield.o CC arch-linux-c-opt/obj/dm/field/impls/shell/dmfieldshell.o CC arch-linux-c-opt/obj/dm/impls/swarm/data_ex.o CC arch-linux-c-opt/obj/dm/impls/swarm/data_bucket.o CC arch-linux-c-opt/obj/dm/field/impls/da/dmfieldda.o CC arch-linux-c-opt/obj/dm/label/dmlabel.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarm_migrate.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_da.o CC 
arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_sort.o CC arch-linux-c-opt/obj/dm/impls/swarm/f90-custom/zswarmf90.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-custom/zswarm.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_plex.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic_view.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarm_migratef.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarmpicf.o CC arch-linux-c-opt/obj/dm/impls/swarm/ftn-auto/swarmf.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarm.o CC arch-linux-c-opt/obj/dm/impls/swarm/swarmpic.o CC arch-linux-c-opt/obj/dm/impls/forest/ftn-auto/forestf.o CC arch-linux-c-opt/obj/dm/impls/shell/ftn-auto/dmshellf.o CC arch-linux-c-opt/obj/dm/impls/shell/ftn-custom/zdmshellf.o CC arch-linux-c-opt/obj/dm/dt/dualspace/impls/lagrange/dspacelagrange.o CC arch-linux-c-opt/obj/dm/impls/shell/dmshell.o CC arch-linux-c-opt/obj/dm/field/impls/ds/dmfieldds.o CC arch-linux-c-opt/obj/dm/impls/forest/forest.o CC arch-linux-c-opt/obj/dm/impls/stag/stagintern.o CC arch-linux-c-opt/obj/dm/impls/stag/stag1d.o CC arch-linux-c-opt/obj/dm/impls/stag/stagda.o CC arch-linux-c-opt/obj/dm/impls/stag/stag.o CC arch-linux-c-opt/obj/dm/interface/dm.o CC arch-linux-c-opt/obj/dm/impls/stag/stagstencil.o CC arch-linux-c-opt/obj/dm/impls/stag/stagmulti.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcgns.o CC arch-linux-c-opt/obj/dm/impls/plex/plexadapt.o CC arch-linux-c-opt/obj/dm/impls/plex/plexceed.o CC arch-linux-c-opt/obj/dm/impls/stag/stagutils.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcoarsen.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcheckinterface.o CC arch-linux-c-opt/obj/dm/impls/plex/plexegads.o CC arch-linux-c-opt/obj/dm/impls/plex/plexegadslite.o CC arch-linux-c-opt/obj/dm/impls/plex/plexextrude.o CC arch-linux-c-opt/obj/dm/impls/stag/stag2d.o CC arch-linux-c-opt/obj/dm/impls/plex/plexgenerate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexfvm.o CC arch-linux-c-opt/obj/dm/impls/plex/plexfluent.o CC arch-linux-c-opt/obj/dm/impls/plex/plexexodusii.o CC arch-linux-c-opt/obj/dm/impls/plex/plexdistribute.o CC arch-linux-c-opt/obj/dm/impls/plex/plexglvis.o CC arch-linux-c-opt/obj/dm/impls/plex/plexhdf5xdmf.o CC arch-linux-c-opt/obj/dm/impls/plex/plexhpddm.o CC arch-linux-c-opt/obj/dm/impls/plex/plexindices.o CC arch-linux-c-opt/obj/dm/impls/plex/plexmed.o CC arch-linux-c-opt/obj/dm/impls/plex/plexmetric.o CC arch-linux-c-opt/obj/dm/impls/stag/stag3d.o CC arch-linux-c-opt/obj/dm/impls/plex/plexhdf5.o CC arch-linux-c-opt/obj/dm/impls/plex/plexgeometry.o CC arch-linux-c-opt/obj/dm/impls/plex/plexcreate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexnatural.o CC arch-linux-c-opt/obj/dm/impls/plex/plexinterpolate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexpoint.o CC arch-linux-c-opt/obj/dm/impls/plex/plexply.o CC arch-linux-c-opt/obj/dm/impls/plex/plexrefine.o CC arch-linux-c-opt/obj/dm/impls/plex/plexorient.o CC arch-linux-c-opt/obj/dm/impls/plex/plexgmsh.o CC arch-linux-c-opt/obj/vec/is/sf/impls/basic/sfpack.o CC arch-linux-c-opt/obj/dm/impls/plex/plexreorder.o CC arch-linux-c-opt/obj/dm/impls/plex/plexproject.o CC arch-linux-c-opt/obj/dm/impls/plex/plexpreallocate.o CC arch-linux-c-opt/obj/dm/impls/plex/plexsection.o CC arch-linux-c-opt/obj/dm/impls/plex/plexpartition.o CC arch-linux-c-opt/obj/dm/impls/plex/pointqueue.o CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexf90.o CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexfemf90.o CC arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexgeometryf90.o CC arch-linux-c-opt/obj/dm/impls/plex/plexvtk.o CC 
arch-linux-c-opt/obj/dm/impls/plex/f90-custom/zplexsectionf90.o CC arch-linux-c-opt/obj/dm/impls/plex/plexsfc.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/interface/ftn-auto/plextransformf.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/extrude/ftn-auto/plextrextrudef.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/1d/plexref1d.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/regular/plexrefregular.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/regular/ftn-auto/plexrefregularf.o CC arch-linux-c-opt/obj/dm/impls/plex/plexfem.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/bl/plexrefbl.o CC arch-linux-c-opt/obj/dm/impls/plex/plexvtu.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/extrude/plextrextrude.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/alfeld/plexrefalfeld.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/tobox/plexreftobox.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcgnsf.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/filter/plextrfilter.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcheckinterfacef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexcreatef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexegadsf.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/impls/refine/sbr/plexrefsbr.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexexodusiif.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexdistributef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexfemf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexfvmf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexgeometryf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexgmshf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexindicesf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexinterpolatef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexnaturalf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexorientf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexpartitionf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexmetricf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexpointf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexprojectf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexrefinef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexreorderf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexsfcf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plextreef.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-auto/plexsubmeshf.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexcreate.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexdistribute.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexexodusii.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexextrude.o CC arch-linux-c-opt/obj/dm/impls/plex/transform/interface/plextransform.o CC arch-linux-c-opt/obj/dm/impls/plex/plex.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexfluent.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexgmsh.o CC arch-linux-c-opt/obj/dm/impls/plex/ftn-custom/zplexsubmesh.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkcreatef.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkmonitorf.o CC arch-linux-c-opt/obj/dm/impls/network/networkmonitor.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkf.o CC arch-linux-c-opt/obj/dm/impls/network/ftn-auto/networkviewf.o CC arch-linux-c-opt/obj/dm/impls/patch/ftn-auto/patchcreatef.o CC 
arch-linux-c-opt/obj/dm/impls/network/networkview.o
[... remaining CC/FC compile lines for the PETSc object files (dm, ksp, snes, ts and tao source trees, plus the Fortran f90 modules) ...]
CLINKER arch-linux-c-opt/lib/libpetsc.so.3.019.2
*** Building SLEPc ***
Checking environment... done
Checking PETSc installation... done
Generating Fortran stubs... done
Checking LAPACK library... done
Checking SCALAPACK... done
Writing various configuration files... done
================================================================================
SLEPc Configuration
================================================================================
SLEPc directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc
  It is a git repository on branch: remotes/origin/jose/test-petsc-branch~2
SLEPc prefix directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt
PETSc directory: /home/vrkaka/SLlibs/petsc
  It is a git repository on branch: main
Architecture "arch-linux-c-opt" with double precision real numbers
SCALAPACK from SCALAPACK linked by PETSc
xxx==========================================================================xxx
Configure stage complete.
Now build the SLEPc library with:
   make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt
xxx==========================================================================xxx
==========================================
Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:20:55 +0300
Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
-----------------------------------------
Using SLEPc directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc
Using PETSc directory: /home/vrkaka/SLlibs/petsc
Using PETSc arch: arch-linux-c-opt
-----------------------------------------
SLEPC_VERSION_RELEASE  0
SLEPC_VERSION_MAJOR    3
SLEPC_VERSION_MINOR    19
SLEPC_VERSION_SUBMINOR 0
SLEPC_VERSION_DATE     "unknown"
SLEPC_VERSION_GIT      "unknown"
SLEPC_VERSION_DATE_GIT "unknown"
-----------------------------------------
Using SLEPc configure options: --prefix=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt
Using SLEPc configuration flags:
#define SLEPC_PETSC_DIR "/home/vrkaka/SLlibs/petsc"
#define SLEPC_PETSC_ARCH "arch-linux-c-opt"
#define SLEPC_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc"
#define SLEPC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib"
#define SLEPC_VERSION_GIT "v3.19.0-34-ga2e6dffce"
#define SLEPC_VERSION_DATE_GIT "2023-05-09 07:30:59 +0000"
#define SLEPC_VERSION_BRANCH_GIT "remotes/origin/jose/test-petsc-branch~2"
#define SLEPC_HAVE_SCALAPACK 1
#define SLEPC_SCALAPACK_HAVE_UNDERSCORE 1
#define SLEPC_HAVE_PACKAGES ":scalapack:"
-----------------------------------------
PETSC_VERSION_RELEASE  0
PETSC_VERSION_MAJOR    3
PETSC_VERSION_MINOR    19
PETSC_VERSION_SUBMINOR 2
PETSC_VERSION_DATE     "unknown"
PETSC_VERSION_GIT      "unknown"
PETSC_VERSION_DATE_GIT "unknown"
-----------------------------------------
Using PETSc configure options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3
Using PETSc configuration flags:
#define PETSC_ARCH "arch-linux-c-opt"
[... full list of PETSC_* configuration #defines (including PETSC_HAVE_MPICH 1, PETSC_HAVE_MPICH_NUMVERSION 40101300 and PETSC_HAVE_OPENMP 1) ...]
#define PETSC__GNU_SOURCE 1
-----------------------------------------
Using C/C++ include paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3
Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp
Using Fortran include/module paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp
-----------------------------------------
Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc
Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3
Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90
Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3
-----------------------------------------
Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -lslepc -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++
------------------------------------------
Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec
------------------------------------------
Using MAKE: /usr/bin/gmake
Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc
==========================================
/usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= slepc_libs
/usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegen.py --petsc-arch=arch-linux-c-opt --pkg-dir=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc --pkg-name=slepc --pkg-pkgs=sys,eps,svd,pep,nep,mfn,lme --pkg-arch=arch-linux-c-opt
Use "/usr/bin/gmake V=1" to see verbose compile lines, "/usr/bin/gmake V=0" to suppress.
[... CC/FC compile lines for the SLEPc object files (sys, eps, svd, pep, nep, mfn and lme source trees, plus the Fortran f90 modules) ...]
arch-linux-c-opt/obj/lme/interface/ftn-custom/zlmef.o FC arch-linux-c-opt/obj/sys/classes/rg/f90-mod/slepcrgmod.o FC arch-linux-c-opt/obj/sys/classes/bv/f90-mod/slepcbvmod.o FC arch-linux-c-opt/obj/sys/classes/fn/f90-mod/slepcfnmod.o FC arch-linux-c-opt/obj/lme/f90-mod/slepclmemod.o FC arch-linux-c-opt/obj/sys/classes/ds/f90-mod/slepcdsmod.o FC arch-linux-c-opt/obj/sys/classes/st/f90-mod/slepcstmod.o FC arch-linux-c-opt/obj/mfn/f90-mod/slepcmfnmod.o FC arch-linux-c-opt/obj/eps/f90-mod/slepcepsmod.o FC arch-linux-c-opt/obj/svd/f90-mod/slepcsvdmod.o FC arch-linux-c-opt/obj/pep/f90-mod/slepcpepmod.o FC arch-linux-c-opt/obj/nep/f90-mod/slepcnepmod.o CLINKER arch-linux-c-opt/lib/libslepc.so.3.019.0 Now to install the library do: make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc install ========================================= *** Installing SLEPc *** *** Installing SLEPc at prefix location: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt *** ==================================== Install complete. Now to check if the libraries are working do (in current directory): make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check ==================================== /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc install-builtafterslepc /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc slepc4py-install gmake[6]: Nothing to be done for 'slepc4py-install'. ========================================= Now to check if the libraries are working do: make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check ========================================= and here is the cmake message when configuring the project: vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ cmake .. 
-- The CXX compiler identification is GNU 11.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- MPI headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- MPI library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmpich.so
-- GMSH HEADERS NOT FOUND (OPTIONAL)
-- GMSH LIBRARY NOT FOUND (OPTIONAL)
-- Ipopt headers found at /home/vrkaka/Ipopt/installation/include/coin-or
-- Ipopt library found at /home/vrkaka/Ipopt/installation/lib/libipopt.so
-- Blas header cblas.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Blas library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libopenblas.so
-- Metis headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Metis library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmetis.so
-- Mumps headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Mumps library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libcmumps.a
-- Petsc header petsc.h found at /home/vrkaka/SLlibs/petsc/include
-- Petsc header petscconf.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Petsc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libpetsc.so
-- Slepc headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include
-- Slepc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libslepc.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/vrkaka/sparselizardipopt/build

After that building the project with cmake goes fine and a simple mpi test works

-Kalle
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From junchao.zhang at gmail.com  Thu Jun  8 07:56:02 2023
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Thu, 8 Jun 2023 07:56:02 -0500
Subject: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT
In-Reply-To: 
References: 
Message-ID: 

It means the mpiexec in your original command line

vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2

was not /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec

Try to use the full path or add /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin in your PATH

--Junchao Zhang
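A minimal sketch of that check, assuming a bash shell (the paths are the ones quoted in this
thread and would need adjusting for a different tree): compare the mpiexec the shell resolves
against the one PETSc built, then either put the PETSc bin directory first on PATH or launch
with the full path.

  # Which mpiexec does the shell resolve? (it may be a system MPI, not the PETSc-built MPICH)
  command -v mpiexec
  mpiexec --version

  # The launcher PETSc actually built for this installation
  /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec --version

  # Option 1: put the PETSc-built MPICH first on PATH for this shell session
  export PATH=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin:$PATH

  # Option 2: bypass PATH and launch with the full path
  /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec -np 2 ./simulations/default/default 1e2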
On Thu, Jun 8, 2023 at 12:31 AM Kalle Karhapää (TAU) wrote:
> Thanks Barry,
>
> make check works:
>
> Running check examples to verify correct installation
> Using PETSC_DIR=/home/vrkaka/SLlibs/petsc and PETSC_ARCH=arch-linux-c-opt
> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process
> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes
> C/C++ example src/snes/tutorials/ex19 run successfully with mumps
> C/C++ example src/vec/vec/tests/ex47 run successfully with hdf5
> Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process
> Running check examples to verify correct installation
> Using SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc, PETSC_DIR=/home/vrkaka/SLlibs/petsc, and PETSC_ARCH=arch-linux-c-opt
> C/C++ example src/eps/tests/test10 run successfully with 1 MPI process
> C/C++ example src/eps/tests/test10 run successfully with 2 MPI process
> Fortran example src/eps/tests/test7f run successfully with 1 MPI process
> Completed SLEPc test examples
> Completed PETSc test examples
>
> make getmpiexec gives:
>
> /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec
>
> which is the mpiexec petsc built
>
> *From:* Barry Smith
> *Sent:* Wednesday, 7 June 2023 17.33
> *To:* Kalle Karhapää (TAU)
> *Cc:* petsc-users at mcs.anl.gov
> *Subject:* Re: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT
>
> Does
>
> make check
>
> work in the PETSc directory?
>
> Is it possible the mpiexec in "mpiexec -np 2 ./simulations/default/default 1e2" is not the mpiexec built by PETSc?
>
> In the PETSc directory you can run
>
> make getmpiexec
>
> to see what mpiexec PETSc built.
>
> On Jun 7, 2023, at 6:07 AM, Kalle Karhapää (TAU) wrote:
>
> Hi!
>
> I am using petsc in a topology optimization project with sparselizard and ipopt.
>
> I am hoping to use mpich to run sparselizard/ipopt calculations faster, but I'm getting the following error straight away:
>
> vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2
> [proxy:0:0 at WKS-101259-LT] HYD_pmcd_pmi_parse_pmi_cmd (pm/pmiserv/common.c:57):
> [proxy:0:0 at WKS-101259-LT] handle_pmi_cmd (pm/pmiserv/pmip_cb.c:115): unable to parse PMI command
> [proxy:0:0 at WKS-101259-LT] pmi_cb (pm/pmiserv/pmip_cb.c:362): unable to handle PMI command
> [proxy:0:0 at WKS-101259-LT] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
> [proxy:0:0 at WKS-101259-LT] main (pm/pmiserv/pmip.c:169): demux engine error waiting for event
>
> the problem persists with different numbers of cores -np 1-10.
> Sometimes after the previous message there is the bonus error:
>
> Fatal error in internal_Init: Other MPI error, error stack:
> internal_Init(66): MPI_Init(argc=(nil), argv=(nil)) failed
> internal_Init(46): Cannot call MPI_INIT or MPI_INIT_THREAD more than once
>
> In petsc configuration I am downloading mpich. Then I'm building the sparselizard project with the same mpich downloaded through petsc installation.
> > > > here is my petsc conf: > > ./configure --with-openmp --download-mpich --download-mumps > --download-scalapack --download-openblas --download-slepc --download-metis > --download-med --download-hdf5 --download-zlib --download-netcdf > --download-pnetcdf --download-exodusii --with-scalar-type=real > --with-debugging=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3'; > > > > > > > > petsc install went as follows: > > > > vrkaka at WKS-101259-LT:~/sparselizardipopt/install_external_libs$ > ./install_petsc.sh > > mkdir: cannot create directory ?/home/vrkaka/SLlibs?: File exists > > __________________________________________ > > FETCHING THE LATEST PETSC VERSION FROM GIT > > Cloning into 'petsc'... > > remote: Enumerating objects: 1097079, done. > > remote: Counting objects: 100% (687/687), done. > > remote: Compressing objects: 100% (144/144), done. > > remote: Total 1097079 (delta 555), reused 664 (delta 539), pack-reused > 1096392 > > Receiving objects: 100% (1097079/1097079), 344.72 MiB | 7.14 MiB/s, done. > > Resolving deltas: 100% (840415/840415), done. > > __________________________________________ > > CONFIGURING PETSC > > > ============================================================================================= > > Configuring PETSc to compile on your system > > > ============================================================================================= > > > ============================================================================================= > > Trying to download > > > https://github.com/pmodels/mpich/releases/download/v4.1.1/mpich-4.1.1.tar.gz > for MPICH > > > ============================================================================================= > > > ============================================================================================= > > Running configure on MPICH; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make on MPICH; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make install on MPICH; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download https://bitbucket.org/petsc/pkg-sowing.git for > SOWING > > > ============================================================================================= > > > ============================================================================================= > > Running configure on SOWING; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make on SOWING; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make install on SOWING; this may take several > minutes > > > 
============================================================================================= > > > ============================================================================================= > > Running arch-linux-c-opt/bin/bfort to generate Fortran > stubs > > > ============================================================================================= > > > ============================================================================================= > > Trying to download http://www.zlib.net/zlib-1.2.13.tar.gz for > ZLIB > > > ============================================================================================= > > > ============================================================================================= > > Building and installing zlib; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download > > > https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.2/src/hdf5-1.12.2.tar.bz2 > > for HDF5 > > > ============================================================================================= > > > ============================================================================================= > > Running configure on HDF5; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make on HDF5; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make install on HDF5; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download https://github.com/parallel-netcdf/pnetcdf for > PNETCDF > > > ============================================================================================= > > > ============================================================================================= > > Running libtoolize on PNETCDF; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Running autoreconf on PNETCDF; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Running configure on PNETCDF; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make on PNETCDF; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make install on PNETCDF; this may take several > minutes > > > 
============================================================================================= > > > ============================================================================================= > > Trying to download > https://github.com/Unidata/netcdf-c/archive/v4.9.1.tar.gz for NETCDF > > > ============================================================================================= > > > ============================================================================================= > > Running configure on NETCDF; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make on NETCDF; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Running make install on NETCDF; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download https://bitbucket.org/petsc/pkg-med.git for > MED > > > ============================================================================================= > > > ============================================================================================= > > Configuring MED with CMake; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Compiling and installing MED; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download https://github.com/gsjaardema/seacas.git for > EXODUSII > > > ============================================================================================= > > > ============================================================================================= > > Configuring EXODUSII with CMake; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Compiling and installing EXODUSII; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download https://bitbucket.org/petsc/pkg-metis.git for > METIS > > > ============================================================================================= > > > ============================================================================================= > > Configuring METIS with CMake; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Compiling and installing METIS; this may take several > minutes > > > 
============================================================================================= > > > ============================================================================================= > > Trying to download https://github.com/xianyi/OpenBLAS.git for > OPENBLAS > > > ============================================================================================= > > > ============================================================================================= > > Compiling OpenBLAS; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Installing OpenBLAS > > > ============================================================================================= > > > ============================================================================================= > > Trying to download https://github.com/Reference-ScaLAPACK/scalapack for > SCALAPACK > > > ============================================================================================= > > > ============================================================================================= > > Configuring SCALAPACK with CMake; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Compiling and installing SCALAPACK; this may take several > minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download > https://graal.ens-lyon.fr/MUMPS/MUMPS_5.6.0.tar.gz for MUMPS > > > ============================================================================================= > > > ============================================================================================= > > Compiling MUMPS; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Installing MUMPS; this may take several minutes > > > ============================================================================================= > > > ============================================================================================= > > Trying to download https://gitlab.com/slepc/slepc.git for > SLEPC > > > ============================================================================================= > > > ============================================================================================= > > SLEPc examples are available at > arch-linux-c-opt/externalpackages/git.slepc > > export SLEPC_DIR=arch-linux-c-opt > > > ============================================================================================= > > Compilers: > > C Compiler: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -Wall -Wwrite-strings > -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow > -fstack-protector -fvisibility=hidden -O3 -fopenmp > > Version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > > C++ Compiler: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-psabi -fstack-protector 
-fvisibility=hidden > -O3 -std=gnu++20 -fopenmp > > Version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > > Fortran Compiler: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -Wall > -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch > -Wno-unused-dummy-argument -O3 -fopenmp > > Version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > > Linkers: > > Shared linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc > -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector > -fvisibility=hidden -O3 > > Dynamic linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc > -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector > -fvisibility=hidden -O3 > > Libraries linked against: > > BlasLapack: > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas > > uses OpenMP; use export OMP_NUM_THREADS=
or -omp_num_threads
to > control the number of threads > > uses 4 byte integers > > MPI: > > Version: 4 > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec > > Implementation: mpich4 > > MPICH_NUMVERSION: 40101300 > > MPICH: > > python: > > Executable: /usr/bin/python3 > > openmp: > > Version: 201511 > > pthread: > > cmake: > > Version: 3.22.1 > > Executable: /usr/bin/cmake > > openblas: > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas > > uses OpenMP; use export OMP_NUM_THREADS=
or -omp_num_threads
to > control the number of threads > > zlib: > > Version: 1.2.13 > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lz > > hdf5: > > Version: 1.12.2 > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lhdf5_hl -lhdf5 > > netcdf: > > Version: 4.9.1 > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lnetcdf > > pnetcdf: > > Version: 1.12.3 > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lpnetcdf > > metis: > > Version: 5.1.0 > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmetis > > slepc: > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lslepc > > regex: > > MUMPS: > > Version: 5.6.0 > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -ldmumps -lmumps_common > -lpord -lpthread > > uses OpenMP; use export OMP_NUM_THREADS=
or -omp_num_threads
to > control the number of threads > > scalapack: > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lscalapack > > exodusii: > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lexoIIv2for32 -lexodus > > med: > > Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmedC -lmed > > sowing: > > Version: 1.1.26 > > Executable: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/bfort > > PETSc: > > Language used to compile PETSc: C > > PETSC_ARCH: arch-linux-c-opt > > PETSC_DIR: /home/vrkaka/SLlibs/petsc > > Prefix: > > Scalar type: real > > Precision: double > > Support for __float128 > > Integer size: 4 bytes > > Single library: yes > > Shared libraries: yes > > Memory alignment from malloc(): 16 bytes > > Using GNU make: /usr/bin/gmake > > > xxx=========================================================================xxx > > Configure stage complete. Now build PETSc libraries with: > > make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt all > > > xxx=========================================================================xxx > > __________________________________________ > > COMPILING PETSC > > /usr/bin/python3 ./config/gmakegen.py --petsc-arch=arch-linux-c-opt > > /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegentest.py > --petsc-dir=/home/vrkaka/SLlibs/petsc --petsc-arch=arch-linux-c-opt > --testdir=./arch-linux-c-opt/tests > > make: '/home/vrkaka/SLlibs/petsc' is up to date. > > make: 'arch-linux-c-opt' is up to date. > > /home/vrkaka/SLlibs/petsc/lib/petsc/bin/petscnagupgrade.py:14: > DeprecationWarning: The distutils package is deprecated and slated for > removal in Python 3.12. Use setuptools or check PEP 632 for potential > alternatives > > from distutils.version import LooseVersion as Version > > ========================================== > > > > See documentation/faq.html and documentation/bugreporting.html > > for help with installation problems. Please send EVERYTHING > > printed out below when reporting problems. Please check the > > mailing list archives and consider subscribing. 
> > > > https://petsc.org/release/community/mailing/ > > > > ========================================== > > Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:19:10 +0300 > > Machine characteristics: Linux WKS-101259-LT > 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 > x86_64 x86_64 x86_64 GNU/Linux > > ----------------------------------------- > > Using PETSc directory: /home/vrkaka/SLlibs/petsc > > Using PETSc arch: arch-linux-c-opt > > ----------------------------------------- > > PETSC_VERSION_RELEASE 0 > > PETSC_VERSION_MAJOR 3 > > PETSC_VERSION_MINOR 19 > > PETSC_VERSION_SUBMINOR 2 > > PETSC_VERSION_DATE "unknown" > > PETSC_VERSION_GIT "unknown" > > PETSC_VERSION_DATE_GIT "unknown" > > ----------------------------------------- > > Using configure Options: --with-openmp --download-mpich --download-mumps > --download-scalapack --download-openblas --download-slepc --download-metis > --download-med --download-hdf5 --download-zlib --download-netcdf > --download-pnetcdf --download-exodusii --with-scalar-type=real > --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 > > Using configuration flags: > > #define PETSC_ARCH "arch-linux-c-opt" > > #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) > > #define PETSC_BLASLAPACK_UNDERSCORE 1 > > #define PETSC_CLANGUAGE_C 1 > > #define PETSC_CXX_RESTRICT __restrict > > #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) > > #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) > > #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) > > #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) > > #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" > > #define PETSC_DIR_SEPARATOR '/' > > #define PETSC_FORTRAN_CHARLEN_T size_t > > #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 > > #define PETSC_FUNCTION_NAME_C __func__ > > #define PETSC_FUNCTION_NAME_CXX __func__ > > #define PETSC_HAVE_ACCESS 1 > > #define PETSC_HAVE_ATOLL 1 > > #define PETSC_HAVE_ATTRIBUTEALIGNED 1 > > #define PETSC_HAVE_BUILTIN_EXPECT 1 > > #define PETSC_HAVE_BZERO 1 > > #define PETSC_HAVE_C99_COMPLEX 1 > > #define PETSC_HAVE_CLOCK 1 > > #define PETSC_HAVE_CXX 1 > > #define PETSC_HAVE_CXX_ATOMIC 1 > > #define PETSC_HAVE_CXX_COMPLEX 1 > > #define PETSC_HAVE_CXX_COMPLEX_FIX 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX11 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX14 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX17 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX20 1 > > #define PETSC_HAVE_DLADDR 1 > > #define PETSC_HAVE_DLCLOSE 1 > > #define PETSC_HAVE_DLERROR 1 > > #define PETSC_HAVE_DLFCN_H 1 > > #define PETSC_HAVE_DLOPEN 1 > > #define PETSC_HAVE_DLSYM 1 > > #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 > > #define PETSC_HAVE_DRAND48 1 > > #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 > > #define PETSC_HAVE_ERF 1 > > #define PETSC_HAVE_EXECUTABLE_EXPORT 1 > > #define PETSC_HAVE_EXODUSII 1 > > #define PETSC_HAVE_FCNTL_H 1 > > #define PETSC_HAVE_FENV_H 1 > > #define PETSC_HAVE_FE_VALUES 1 > > #define PETSC_HAVE_FLOAT_H 1 > > #define PETSC_HAVE_FORK 1 > > #define PETSC_HAVE_FORTRAN 1 > > #define PETSC_HAVE_FORTRAN_FLUSH 1 > > #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 > > #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 > > #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 > > #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 > > #define PETSC_HAVE_GETCWD 1 > > #define PETSC_HAVE_GETDOMAINNAME 1 > > #define PETSC_HAVE_GETHOSTBYNAME 1 > > #define PETSC_HAVE_GETHOSTNAME 1 > > #define PETSC_HAVE_GETPAGESIZE 1 > > #define PETSC_HAVE_GETRUSAGE 1 
> > #define PETSC_HAVE_HDF5 1 > > #define PETSC_HAVE_IMMINTRIN_H 1 > > #define PETSC_HAVE_INTTYPES_H 1 > > #define PETSC_HAVE_ISINF 1 > > #define PETSC_HAVE_ISNAN 1 > > #define PETSC_HAVE_ISNORMAL 1 > > #define PETSC_HAVE_LGAMMA 1 > > #define PETSC_HAVE_LOG2 1 > > #define PETSC_HAVE_LSEEK 1 > > #define PETSC_HAVE_MALLOC_H 1 > > #define PETSC_HAVE_MED 1 > > #define PETSC_HAVE_MEMMOVE 1 > > #define PETSC_HAVE_METIS 1 > > #define PETSC_HAVE_MKSTEMP 1 > > #define PETSC_HAVE_MMAP 1 > > #define PETSC_HAVE_MPICH 1 > > #define PETSC_HAVE_MPICH_NUMVERSION 40101300 > > #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 > > #define PETSC_HAVE_MPIIO 1 > > #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 > > #define PETSC_HAVE_MPI_COMBINER_DUP 1 > > #define PETSC_HAVE_MPI_COMBINER_NAMED 1 > > #define PETSC_HAVE_MPI_F90MODULE 1 > > #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 > > #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 > > #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 > > #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 > > #define PETSC_HAVE_MPI_INIT_THREAD 1 > > #define PETSC_HAVE_MPI_INT64_T 1 > > #define PETSC_HAVE_MPI_LARGE_COUNT 1 > > #define PETSC_HAVE_MPI_LONG_DOUBLE 1 > > #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 > > #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 > > #define PETSC_HAVE_MPI_ONE_SIDED 1 > > #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 > > #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 > > #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 > > #define PETSC_HAVE_MPI_RGET 1 > > #define PETSC_HAVE_MPI_WIN_CREATE 1 > > #define PETSC_HAVE_MUMPS 1 > > #define PETSC_HAVE_NANOSLEEP 1 > > #define PETSC_HAVE_NETCDF 1 > > #define PETSC_HAVE_NETDB_H 1 > > #define PETSC_HAVE_NETINET_IN_H 1 > > #define PETSC_HAVE_OPENBLAS 1 > > #define PETSC_HAVE_OPENMP 1 > > #define PETSC_HAVE_PACKAGES > ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" > > #define PETSC_HAVE_PNETCDF 1 > > #define PETSC_HAVE_POPEN 1 > > #define PETSC_HAVE_POSIX_MEMALIGN 1 > > #define PETSC_HAVE_PTHREAD 1 > > #define PETSC_HAVE_PWD_H 1 > > #define PETSC_HAVE_RAND 1 > > #define PETSC_HAVE_READLINK 1 > > #define PETSC_HAVE_REALPATH 1 > > #define PETSC_HAVE_REAL___FLOAT128 1 > > #define PETSC_HAVE_REGEX 1 > > #define PETSC_HAVE_RTLD_GLOBAL 1 > > #define PETSC_HAVE_RTLD_LAZY 1 > > #define PETSC_HAVE_RTLD_LOCAL 1 > > #define PETSC_HAVE_RTLD_NOW 1 > > #define PETSC_HAVE_SCALAPACK 1 > > #define PETSC_HAVE_SETJMP_H 1 > > #define PETSC_HAVE_SLEEP 1 > > #define PETSC_HAVE_SLEPC 1 > > #define PETSC_HAVE_SNPRINTF 1 > > #define PETSC_HAVE_SOCKET 1 > > #define PETSC_HAVE_SOWING 1 > > #define PETSC_HAVE_SO_REUSEADDR 1 > > #define PETSC_HAVE_STDATOMIC_H 1 > > #define PETSC_HAVE_STDINT_H 1 > > #define PETSC_HAVE_STRCASECMP 1 > > #define PETSC_HAVE_STRINGS_H 1 > > #define PETSC_HAVE_STRUCT_SIGACTION 1 > > #define PETSC_HAVE_SYS_PARAM_H 1 > > #define PETSC_HAVE_SYS_PROCFS_H 1 > > #define PETSC_HAVE_SYS_RESOURCE_H 1 > > #define PETSC_HAVE_SYS_SOCKET_H 1 > > #define PETSC_HAVE_SYS_TIMES_H 1 > > #define PETSC_HAVE_SYS_TIME_H 1 > > #define PETSC_HAVE_SYS_TYPES_H 1 > > #define PETSC_HAVE_SYS_UTSNAME_H 1 > > #define PETSC_HAVE_SYS_WAIT_H 1 > > #define PETSC_HAVE_TAU_PERFSTUBS 1 > > #define PETSC_HAVE_TGAMMA 1 > > #define PETSC_HAVE_TIME 1 > > #define PETSC_HAVE_TIME_H 1 > > #define PETSC_HAVE_UNAME 1 > > #define PETSC_HAVE_UNISTD_H 1 > > #define PETSC_HAVE_USLEEP 1 > > #define PETSC_HAVE_VA_COPY 1 > > #define PETSC_HAVE_VSNPRINTF 1 > > #define PETSC_HAVE_XMMINTRIN_H 
1 > > #define PETSC_HDF5_HAVE_PARALLEL 1 > > #define PETSC_HDF5_HAVE_ZLIB 1 > > #define PETSC_INTPTR_T intptr_t > > #define PETSC_INTPTR_T_FMT "#" PRIxPTR > > #define PETSC_IS_COLORING_MAX USHRT_MAX > > #define PETSC_IS_COLORING_VALUE_TYPE short > > #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 > > #define PETSC_LEVEL1_DCACHE_LINESIZE 64 > > #define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" > > #define PETSC_MAX_PATH_LEN 4096 > > #define PETSC_MEMALIGN 16 > > #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch > -Wno-stringop-overflow -O3 > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath > -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags > -lmpi" > > #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT > > #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" > > #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA > > #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 > > #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 > > #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 > > #define PETSC_PYTHON_EXE "/usr/bin/python3" > > #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) > > #define PETSC_REPLACE_DIR_SEPARATOR '\\' > > #define PETSC_SIGNAL_CAST > > #define PETSC_SIZEOF_INT 4 > > #define PETSC_SIZEOF_LONG 8 > > #define PETSC_SIZEOF_LONG_LONG 8 > > #define PETSC_SIZEOF_SIZE_T 8 > > #define PETSC_SIZEOF_VOID_P 8 > > #define PETSC_SLSUFFIX "so" > > #define PETSC_UINTPTR_T uintptr_t > > #define PETSC_UINTPTR_T_FMT "#" PRIxPTR > > #define PETSC_UNUSED __attribute((unused)) > > #define PETSC_USE_AVX512_KERNELS 1 > > #define PETSC_USE_BACKWARD_LOOP 1 > > #define PETSC_USE_CTABLE 1 > > #define PETSC_USE_DMLANDAU_2D 1 > > #define PETSC_USE_INFO 1 > > #define PETSC_USE_ISATTY 1 > > #define PETSC_USE_LOG 1 > > #define PETSC_USE_MALLOC_COALESCED 1 > > #define PETSC_USE_PROC_FOR_SIZE 1 > > #define PETSC_USE_REAL_DOUBLE 1 > > #define PETSC_USE_SHARED_LIBRARIES 1 > > #define PETSC_USE_SINGLE_LIBRARY 1 > > #define PETSC_USE_SOCKET_VIEWER 1 > > #define PETSC_USE_VISIBILITY_C 1 > > #define PETSC_USE_VISIBILITY_CXX 1 > > #define PETSC_USING_64BIT_PTR 1 > > #define PETSC_USING_F2003 1 > > #define PETSC_USING_F90FREEFORM 1 > > #define PETSC_VERSION_BRANCH_GIT "main" > > #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" > > #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" > > #define PETSC__BSD_SOURCE 1 > > #define PETSC__DEFAULT_SOURCE 1 > > #define PETSC__GNU_SOURCE 1 > > ----------------------------------------- > > Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o > .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch > -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > > mpicc -show: gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath > -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags > -lmpi > > C compiler version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > > Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx > -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden > -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > > mpicxx -show: g++ -Wno-lto-type-mismatch -Wno-psabi -O3 -std=gnu++20 
-fPIC > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpicxx -Wl,-rpath > -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags > -lmpi > > C++ compiler version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > > Using Fortran compile: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall > -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch > -Wno-unused-dummy-argument -O3 -fopenmp > -I/home/vrkaka/SLlibs/petsc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > > mpif90 -show: gfortran -fPIC -ffree-line-length-none -ffree-line-length-0 > -Wno-lto-type-mismatch -O3 -fallow-argument-mismatch > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpifort -Wl,-rpath > -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags > -lmpi > > Fortran compiler version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 > > ----------------------------------------- > > Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc > > Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector > -fvisibility=hidden -O3 > > Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 > > Using Fortran flags: -fopenmp -Wall -ffree-line-length-none > -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 > > ----------------------------------------- > > Using system modules: > > Using mpi.h: # 1 > "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include/mpi.h" 1 > > ----------------------------------------- > > Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 > -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord > -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC > -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi > -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ > > ------------------------------------------ > > Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec > > ------------------------------------------ > > Using MAKE: /usr/bin/gmake > > Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: > --no-print-directory -- PETSC_ARCH=arch-linux-c-opt > PETSC_DIR=/home/vrkaka/SLlibs/petsc > > ========================================== > > /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 > --output-sync=recurse V= libs > > FC arch-linux-c-opt/obj/sys/fsrc/somefort.o > > CXX arch-linux-c-opt/obj/sys/dll/cxx/demangle.o > > FC arch-linux-c-opt/obj/sys/f90-src/fsrc/f90_fwrap.o > > CC arch-linux-c-opt/obj/sys/f90-custom/zsysf90.o > > FC arch-linux-c-opt/obj/sys/f90-mod/petscsysmod.o > > CC arch-linux-c-opt/obj/sys/dll/dlimpl.o > > CC arch-linux-c-opt/obj/sys/dll/dl.o > > CC arch-linux-c-opt/obj/sys/dll/ftn-auto/regf.o > > CXX > arch-linux-c-opt/obj/sys/objects/device/impls/host/hostcontext.o > > CC arch-linux-c-opt/obj/sys/ftn-custom/zsys.o > > CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostdevice.o > > CC arch-linux-c-opt/obj/sys/ftn-custom/zutils.o > > CXX > 
arch-linux-c-opt/obj/sys/objects/device/interface/global_dcontext.o > > CC arch-linux-c-opt/obj/sys/dll/reg.o > > CC arch-linux-c-opt/obj/sys/logging/xmlviewer.o > > CC arch-linux-c-opt/obj/sys/logging/utils/stack.o > > CC arch-linux-c-opt/obj/sys/logging/utils/classlog.o > > CXX arch-linux-c-opt/obj/sys/objects/device/interface/device.o > > CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zpetscloghf.o > > CC arch-linux-c-opt/obj/sys/logging/utils/stagelog.o > > CC arch-linux-c-opt/obj/sys/logging/ftn-auto/xmllogeventf.o > > CC arch-linux-c-opt/obj/sys/logging/ftn-auto/plogf.o > > CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zplogf.o > > CC arch-linux-c-opt/obj/sys/logging/utils/eventlog.o > > CC arch-linux-c-opt/obj/sys/python/ftn-custom/zpythonf.o > > CC arch-linux-c-opt/obj/sys/utils/arch.o > > CXX arch-linux-c-opt/obj/sys/objects/device/interface/memory.o > > CC arch-linux-c-opt/obj/sys/python/pythonsys.o > > CC arch-linux-c-opt/obj/sys/utils/fhost.o > > CC arch-linux-c-opt/obj/sys/utils/fuser.o > > CC arch-linux-c-opt/obj/sys/utils/matheq.o > > CC arch-linux-c-opt/obj/sys/utils/mathclose.o > > CC arch-linux-c-opt/obj/sys/utils/mathfit.o > > CC arch-linux-c-opt/obj/sys/utils/mathinf.o > > CC arch-linux-c-opt/obj/sys/utils/ctable.o > > CC arch-linux-c-opt/obj/sys/utils/memc.o > > CC arch-linux-c-opt/obj/sys/utils/mpilong.o > > CC arch-linux-c-opt/obj/sys/logging/xmllogevent.o > > CC arch-linux-c-opt/obj/sys/utils/mpitr.o > > CC arch-linux-c-opt/obj/sys/utils/mpishm.o > > CC arch-linux-c-opt/obj/sys/utils/pbarrier.o > > CC arch-linux-c-opt/obj/sys/utils/mpiu.o > > CC arch-linux-c-opt/obj/sys/utils/psleep.o > > CC arch-linux-c-opt/obj/sys/utils/pdisplay.o > > CC arch-linux-c-opt/obj/sys/utils/psplit.o > > CC arch-linux-c-opt/obj/sys/utils/segbuffer.o > > CC arch-linux-c-opt/obj/sys/utils/mpimesg.o > > CC arch-linux-c-opt/obj/sys/utils/sortd.o > > CC arch-linux-c-opt/obj/sys/utils/sseenabled.o > > CC arch-linux-c-opt/obj/sys/utils/sortip.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zarchf.o > > CC arch-linux-c-opt/obj/sys/utils/mpits.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zfhostf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zsortsof.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zstrf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/memcf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpitsf.o > > CC arch-linux-c-opt/obj/sys/logging/plog.o > > CC arch-linux-c-opt/obj/sys/utils/str.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpiuf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psleepf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psplitf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortdf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortipf.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortsof.o > > CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortif.o > > CC arch-linux-c-opt/obj/sys/totalview/tv_data_display.o > > CC arch-linux-c-opt/obj/sys/objects/gcomm.o > > CC arch-linux-c-opt/obj/sys/objects/gcookie.o > > CC arch-linux-c-opt/obj/sys/objects/fcallback.o > > CC arch-linux-c-opt/obj/sys/objects/destroy.o > > CC arch-linux-c-opt/obj/sys/objects/gtype.o > > CC arch-linux-c-opt/obj/sys/utils/sorti.o > > CXX arch-linux-c-opt/obj/sys/objects/device/interface/dcontext.o > > CC arch-linux-c-opt/obj/sys/objects/olist.o > > CC arch-linux-c-opt/obj/sys/objects/garbage.o > > CC arch-linux-c-opt/obj/sys/objects/pgname.o > > CC arch-linux-c-opt/obj/sys/objects/package.o > > CC arch-linux-c-opt/obj/sys/objects/inherit.o > > CXX > 
arch-linux-c-opt/obj/sys/objects/device/interface/mark_dcontext.o > > CC arch-linux-c-opt/obj/sys/utils/sortso.o > > CC arch-linux-c-opt/obj/sys/objects/aoptions.o > > CC arch-linux-c-opt/obj/sys/objects/prefix.o > > CC arch-linux-c-opt/obj/sys/objects/init.o > > CC arch-linux-c-opt/obj/sys/objects/pname.o > > CC arch-linux-c-opt/obj/sys/objects/ptype.o > > CC arch-linux-c-opt/obj/sys/objects/state.o > > CC arch-linux-c-opt/obj/sys/objects/version.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/destroyf.o > > CC arch-linux-c-opt/obj/sys/objects/device/util/memory.o > > CC arch-linux-c-opt/obj/sys/objects/device/util/devicereg.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcommf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcookief.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/inheritf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/optionsf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/pinitf.o > > CC arch-linux-c-opt/obj/sys/objects/tagm.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/statef.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/subcommf.o > > CC arch-linux-c-opt/obj/sys/objects/subcomm.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-auto/tagmf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgcommf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zdestroyf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgtype.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zinheritf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsyamlf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpackage.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpgnamef.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpnamef.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zprefixf.o > > CC arch-linux-c-opt/obj/sys/objects/pinit.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zptypef.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstartf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zversionf.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstart.o > > CC arch-linux-c-opt/obj/sys/memory/mhbw.o > > CC arch-linux-c-opt/obj/sys/memory/mem.o > > CC arch-linux-c-opt/obj/sys/memory/ftn-auto/memf.o > > CC arch-linux-c-opt/obj/sys/memory/ftn-custom/zmtrf.o > > CC arch-linux-c-opt/obj/sys/memory/mal.o > > CC arch-linux-c-opt/obj/sys/memory/ftn-auto/mtrf.o > > CC arch-linux-c-opt/obj/sys/perfstubs/pstimer.o > > CC arch-linux-c-opt/obj/sys/error/errabort.o > > CC arch-linux-c-opt/obj/sys/error/checkptr.o > > CC arch-linux-c-opt/obj/sys/error/errstop.o > > CC arch-linux-c-opt/obj/sys/error/pstack.o > > CC arch-linux-c-opt/obj/sys/error/adebug.o > > CC arch-linux-c-opt/obj/sys/error/errtrace.o > > CC arch-linux-c-opt/obj/sys/error/fp.o > > CC arch-linux-c-opt/obj/sys/memory/mtr.o > > CC arch-linux-c-opt/obj/sys/error/signal.o > > CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsf.o > > CC arch-linux-c-opt/obj/sys/error/ftn-auto/adebugf.o > > CC arch-linux-c-opt/obj/sys/error/ftn-auto/checkptrf.o > > CC arch-linux-c-opt/obj/sys/objects/options.o > > CC arch-linux-c-opt/obj/sys/error/ftn-custom/zerrf.o > > CC arch-linux-c-opt/obj/sys/error/ftn-auto/errf.o > > CC arch-linux-c-opt/obj/sys/error/ftn-auto/fpf.o > > CC arch-linux-c-opt/obj/sys/error/ftn-auto/signalf.o > > CC arch-linux-c-opt/obj/sys/error/err.o > > CC arch-linux-c-opt/obj/sys/fileio/fpath.o > > CC arch-linux-c-opt/obj/sys/fileio/fdir.o > > CC arch-linux-c-opt/obj/sys/fileio/fwd.o > > CC arch-linux-c-opt/obj/sys/fileio/ghome.o > > CC 
arch-linux-c-opt/obj/sys/fileio/ftest.o > > CC arch-linux-c-opt/obj/sys/fileio/grpath.o > > CC arch-linux-c-opt/obj/sys/fileio/rpath.o > > CC arch-linux-c-opt/obj/sys/fileio/mpiuopen.o > > CC arch-linux-c-opt/obj/sys/fileio/smatlab.o > > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmpiuopenf.o > > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zghomef.o > > CC arch-linux-c-opt/obj/sys/fileio/fretrieve.o > > CC arch-linux-c-opt/obj/sys/fileio/ftn-auto/sysiof.o > > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmprintf.o > > CC arch-linux-c-opt/obj/sys/info/ftn-auto/verboseinfof.o > > CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zsysiof.o > > CC arch-linux-c-opt/obj/sys/info/ftn-custom/zverboseinfof.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/axis.o > > CC arch-linux-c-opt/obj/sys/fileio/mprint.o > > CC arch-linux-c-opt/obj/sys/info/verboseinfo.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/bars.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/cmap.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/image.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/axisc.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/dscatter.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/lg.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/zoom.o > > CC arch-linux-c-opt/obj/sys/fileio/sysio.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zlgcf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/hists.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zzoomf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zaxisf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/axiscf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/barsf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/lgc.o > > CC > arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/dscatterf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/histsf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dcoor.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dclear.o > > CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgcf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dellipse.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dflush.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpause.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dline.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmarker.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmouse.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpoint.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawregall.o > > CC arch-linux-c-opt/obj/sys/objects/optionsyaml.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/drect.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawreg.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/draw.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtext.o > > CC > arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawf.o > > CC > arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawregf.o > > CC > arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtextf.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dsave.o > > CC > arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtrif.o > > CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtri.o > > CC > arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dclearf.o > > CC > 
arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dcoorf.o
> > [... several hundred further quoted "CC"/"FC" compile-progress lines from the PETSc make log, covering object files under arch-linux-c-opt/obj/ in the sys, vec, mat, dm, ksp, snes, ts, and tao source trees; only successful compile lines appear in this portion of the log ...]
> CC arch-linux-c-opt/obj/ts/utils/dmplexlandau/plexland.o > > CC arch-linux-c-opt/obj/tao/bound/impls/tron/tron.o > > CC > arch-linux-c-opt/obj/ts/characteristic/interface/characteristic.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bnls.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bntl.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bntr.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnkls.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnktl.o > > CC arch-linux-c-opt/obj/tao/pde_constrained/impls/lcl/lcl.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnk.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bncg/bncg.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/bqnktr.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bqnk/ftn-auto/bqnkf.o > > CC arch-linux-c-opt/obj/tao/shell/ftn-auto/taoshellf.o > > CC arch-linux-c-opt/obj/tao/shell/taoshell.o > > CC arch-linux-c-opt/obj/tao/matrix/submatfree.o > > CC arch-linux-c-opt/obj/tao/bound/impls/bnk/bnk.o > > CC arch-linux-c-opt/obj/tao/matrix/adamat.o > > CC arch-linux-c-opt/obj/tao/quadratic/impls/gpcg/gpcg.o > > CC > arch-linux-c-opt/obj/tao/constrained/impls/almm/ftn-auto/almmutilsf.o > > CC arch-linux-c-opt/obj/tao/constrained/impls/almm/almmutils.o > > CC arch-linux-c-opt/obj/tao/quadratic/impls/bqpip/bqpip.o > > CC > arch-linux-c-opt/obj/tao/constrained/impls/admm/ftn-auto/admmf.o > > CC arch-linux-c-opt/obj/ts/impls/implicit/glle/glle.o > > CC > arch-linux-c-opt/obj/tao/constrained/impls/admm/ftn-custom/zadmmf.o > > CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssls.o > > CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssfls.o > > CC > arch-linux-c-opt/obj/tao/linesearch/interface/dlregis_taolinesearch.o > > CC arch-linux-c-opt/obj/tao/complementarity/impls/ssls/ssils.o > > CC arch-linux-c-opt/obj/tao/constrained/impls/almm/almm.o > > CC arch-linux-c-opt/obj/tao/complementarity/impls/asls/asfls.o > > CC arch-linux-c-opt/obj/tao/complementarity/impls/asls/asils.o > > CC > arch-linux-c-opt/obj/tao/linesearch/interface/ftn-auto/taolinesearchf.o > > CC > arch-linux-c-opt/obj/tao/linesearch/interface/ftn-custom/ztaolinesearchf.o > > CC arch-linux-c-opt/obj/tao/constrained/impls/admm/admm.o > > CC arch-linux-c-opt/obj/tao/constrained/impls/ipm/ipm.o > > CC > arch-linux-c-opt/obj/tao/linesearch/impls/gpcglinesearch/gpcglinesearch.o > > CC arch-linux-c-opt/obj/tao/linesearch/impls/unit/unit.o > > CC > arch-linux-c-opt/obj/tao/linesearch/impls/morethuente/morethuente.o > > CC arch-linux-c-opt/obj/tao/snes/taosnes.o > > CC arch-linux-c-opt/obj/tao/linesearch/interface/taolinesearch.o > > CC arch-linux-c-opt/obj/tao/linesearch/impls/armijo/armijo.o > > CC > arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/ftn-auto/brgnf.o > > CC arch-linux-c-opt/obj/tao/linesearch/impls/owarmijo/owarmijo.o > > CC > arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/ftn-custom/zbrgnf.o > > CC arch-linux-c-opt/obj/tao/interface/dlregistao.o > > CC arch-linux-c-opt/obj/tao/leastsquares/impls/pounders/gqt.o > > CC arch-linux-c-opt/obj/tao/interface/fdiff.o > > CC arch-linux-c-opt/obj/tao/leastsquares/impls/brgn/brgn.o > > CC arch-linux-c-opt/obj/tao/interface/taosolver_bounds.o > > CC arch-linux-c-opt/obj/tao/interface/taosolverregi.o > > CC arch-linux-c-opt/obj/tao/constrained/impls/ipm/pdipm.o > > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_boundsf.o > > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_hjf.o > > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolver_fgf.o > > CC 
arch-linux-c-opt/obj/tao/interface/taosolver_fg.o > > CC arch-linux-c-opt/obj/tao/python/pythontao.o > > CC arch-linux-c-opt/obj/tao/python/ftn-custom/zpythontaof.o > > CC arch-linux-c-opt/obj/tao/interface/taosolver_hj.o > > CC arch-linux-c-opt/obj/tao/interface/ftn-auto/taosolverf.o > > CC arch-linux-c-opt/obj/tao/interface/ftn-custom/ztaosolverf.o > > CC arch-linux-c-opt/obj/tao/unconstrained/impls/lmvm/lmvm.o > > CC arch-linux-c-opt/obj/tao/interface/taosolver.o > > CC arch-linux-c-opt/obj/tao/unconstrained/impls/owlqn/owlqn.o > > CC > arch-linux-c-opt/obj/tao/unconstrained/impls/neldermead/neldermead.o > > CC arch-linux-c-opt/obj/tao/util/ftn-auto/tao_utilf.o > > CC arch-linux-c-opt/obj/tao/unconstrained/impls/cg/taocg.o > > FC arch-linux-c-opt/obj/sys/classes/bag/f2003-src/fsrc/bagenum.o > > FC arch-linux-c-opt/obj/sys/objects/f2003-src/fsrc/optionenum.o > > CC arch-linux-c-opt/obj/tao/unconstrained/impls/ntr/ntr.o > > CC arch-linux-c-opt/obj/tao/unconstrained/impls/ntl/ntl.o > > FC arch-linux-c-opt/obj/dm/f90-mod/petscdmswarmmod.o > > CC arch-linux-c-opt/obj/tao/unconstrained/impls/bmrm/bmrm.o > > CC arch-linux-c-opt/obj/tao/unconstrained/impls/nls/nls.o > > CC arch-linux-c-opt/obj/tao/util/tao_util.o > > FC arch-linux-c-opt/obj/dm/f90-mod/petscdmdamod.o > > CC arch-linux-c-opt/obj/tao/leastsquares/impls/pounders/pounders.o > > FC arch-linux-c-opt/obj/dm/f90-mod/petscdmplexmod.o > > FC arch-linux-c-opt/obj/ksp/f90-mod/petsckspdefmod.o > > FC arch-linux-c-opt/obj/ksp/f90-mod/petscpcmod.o > > FC arch-linux-c-opt/obj/ksp/f90-mod/petsckspmod.o > > FC arch-linux-c-opt/obj/snes/f90-mod/petscsnesmod.o > > FC arch-linux-c-opt/obj/ts/f90-mod/petsctsmod.o > > FC arch-linux-c-opt/obj/tao/f90-mod/petsctaomod.o > > CLINKER arch-linux-c-opt/lib/libpetsc.so.3.019.2 > > *** Building SLEPc *** > > Checking environment... done > > Checking PETSc installation... done > > Generating Fortran stubs... done > > Checking LAPACK library... done > > Checking SCALAPACK... done > > Writing various configuration files... done > > > > > ================================================================================ > > SLEPc Configuration > > > ================================================================================ > > > > SLEPc directory: > > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > > It is a git repository on branch: remotes/origin/jose/test-petsc-branch~2 > > SLEPc prefix directory: > > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt > > PETSc directory: > > /home/vrkaka/SLlibs/petsc > > It is a git repository on branch: main > > Architecture "arch-linux-c-opt" with double precision real numbers > > SCALAPACK from SCALAPACK linked by PETSc > > > > > xxx==========================================================================xxx > > Configure stage complete. 
Now build the SLEPc library with: > > make > SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt > > > xxx==========================================================================xxx > > > > ========================================== > > Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:20:55 +0300 > > Machine characteristics: Linux WKS-101259-LT > 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 > x86_64 x86_64 x86_64 GNU/Linux > > ----------------------------------------- > > Using SLEPc directory: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > > Using PETSc directory: /home/vrkaka/SLlibs/petsc > > Using PETSc arch: arch-linux-c-opt > > ----------------------------------------- > > SLEPC_VERSION_RELEASE 0 > > SLEPC_VERSION_MAJOR 3 > > SLEPC_VERSION_MINOR 19 > > SLEPC_VERSION_SUBMINOR 0 > > SLEPC_VERSION_DATE "unknown" > > SLEPC_VERSION_GIT "unknown" > > SLEPC_VERSION_DATE_GIT "unknown" > > ----------------------------------------- > > Using SLEPc configure options: > --prefix=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt > > Using SLEPc configuration flags: > > #define SLEPC_PETSC_DIR "/home/vrkaka/SLlibs/petsc" > > #define SLEPC_PETSC_ARCH "arch-linux-c-opt" > > #define SLEPC_DIR > "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc" > > #define SLEPC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" > > #define SLEPC_VERSION_GIT "v3.19.0-34-ga2e6dffce" > > #define SLEPC_VERSION_DATE_GIT "2023-05-09 07:30:59 +0000" > > #define SLEPC_VERSION_BRANCH_GIT "remotes/origin/jose/test-petsc-branch~2" > > #define SLEPC_HAVE_SCALAPACK 1 > > #define SLEPC_SCALAPACK_HAVE_UNDERSCORE 1 > > #define SLEPC_HAVE_PACKAGES ":scalapack:" > > ----------------------------------------- > > PETSC_VERSION_RELEASE 0 > > PETSC_VERSION_MAJOR 3 > > PETSC_VERSION_MINOR 19 > > PETSC_VERSION_SUBMINOR 2 > > PETSC_VERSION_DATE "unknown" > > PETSC_VERSION_GIT "unknown" > > PETSC_VERSION_DATE_GIT "unknown" > > ----------------------------------------- > > Using PETSc configure options: --with-openmp --download-mpich > --download-mumps --download-scalapack --download-openblas --download-slepc > --download-metis --download-med --download-hdf5 --download-zlib > --download-netcdf --download-pnetcdf --download-exodusii > --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 > FOPTFLAGS=-O3 > > Using PETSc configuration flags: > > #define PETSC_ARCH "arch-linux-c-opt" > > #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) > > #define PETSC_BLASLAPACK_UNDERSCORE 1 > > #define PETSC_CLANGUAGE_C 1 > > #define PETSC_CXX_RESTRICT __restrict > > #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) > > #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) > > #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) > > #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) > > #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" > > #define PETSC_DIR_SEPARATOR '/' > > #define PETSC_FORTRAN_CHARLEN_T size_t > > #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 > > #define PETSC_FUNCTION_NAME_C __func__ > > #define PETSC_FUNCTION_NAME_CXX __func__ > > #define PETSC_HAVE_ACCESS 1 > > #define PETSC_HAVE_ATOLL 1 > > #define PETSC_HAVE_ATTRIBUTEALIGNED 1 > > #define PETSC_HAVE_BUILTIN_EXPECT 1 > > #define PETSC_HAVE_BZERO 1 > > #define PETSC_HAVE_C99_COMPLEX 1 > > #define PETSC_HAVE_CLOCK 1 > > #define 
PETSC_HAVE_CXX 1 > > #define PETSC_HAVE_CXX_ATOMIC 1 > > #define PETSC_HAVE_CXX_COMPLEX 1 > > #define PETSC_HAVE_CXX_COMPLEX_FIX 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX11 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX14 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX17 1 > > #define PETSC_HAVE_CXX_DIALECT_CXX20 1 > > #define PETSC_HAVE_DLADDR 1 > > #define PETSC_HAVE_DLCLOSE 1 > > #define PETSC_HAVE_DLERROR 1 > > #define PETSC_HAVE_DLFCN_H 1 > > #define PETSC_HAVE_DLOPEN 1 > > #define PETSC_HAVE_DLSYM 1 > > #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 > > #define PETSC_HAVE_DRAND48 1 > > #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 > > #define PETSC_HAVE_ERF 1 > > #define PETSC_HAVE_EXECUTABLE_EXPORT 1 > > #define PETSC_HAVE_EXODUSII 1 > > #define PETSC_HAVE_FCNTL_H 1 > > #define PETSC_HAVE_FENV_H 1 > > #define PETSC_HAVE_FE_VALUES 1 > > #define PETSC_HAVE_FLOAT_H 1 > > #define PETSC_HAVE_FORK 1 > > #define PETSC_HAVE_FORTRAN 1 > > #define PETSC_HAVE_FORTRAN_FLUSH 1 > > #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 > > #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 > > #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 > > #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 > > #define PETSC_HAVE_GETCWD 1 > > #define PETSC_HAVE_GETDOMAINNAME 1 > > #define PETSC_HAVE_GETHOSTBYNAME 1 > > #define PETSC_HAVE_GETHOSTNAME 1 > > #define PETSC_HAVE_GETPAGESIZE 1 > > #define PETSC_HAVE_GETRUSAGE 1 > > #define PETSC_HAVE_HDF5 1 > > #define PETSC_HAVE_IMMINTRIN_H 1 > > #define PETSC_HAVE_INTTYPES_H 1 > > #define PETSC_HAVE_ISINF 1 > > #define PETSC_HAVE_ISNAN 1 > > #define PETSC_HAVE_ISNORMAL 1 > > #define PETSC_HAVE_LGAMMA 1 > > #define PETSC_HAVE_LOG2 1 > > #define PETSC_HAVE_LSEEK 1 > > #define PETSC_HAVE_MALLOC_H 1 > > #define PETSC_HAVE_MED 1 > > #define PETSC_HAVE_MEMMOVE 1 > > #define PETSC_HAVE_METIS 1 > > #define PETSC_HAVE_MKSTEMP 1 > > #define PETSC_HAVE_MMAP 1 > > #define PETSC_HAVE_MPICH 1 > > #define PETSC_HAVE_MPICH_NUMVERSION 40101300 > > #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 > > #define PETSC_HAVE_MPIIO 1 > > #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 > > #define PETSC_HAVE_MPI_COMBINER_DUP 1 > > #define PETSC_HAVE_MPI_COMBINER_NAMED 1 > > #define PETSC_HAVE_MPI_F90MODULE 1 > > #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 > > #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 > > #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 > > #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 > > #define PETSC_HAVE_MPI_INIT_THREAD 1 > > #define PETSC_HAVE_MPI_INT64_T 1 > > #define PETSC_HAVE_MPI_LARGE_COUNT 1 > > #define PETSC_HAVE_MPI_LONG_DOUBLE 1 > > #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 > > #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 > > #define PETSC_HAVE_MPI_ONE_SIDED 1 > > #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 > > #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 > > #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 > > #define PETSC_HAVE_MPI_RGET 1 > > #define PETSC_HAVE_MPI_WIN_CREATE 1 > > #define PETSC_HAVE_MUMPS 1 > > #define PETSC_HAVE_NANOSLEEP 1 > > #define PETSC_HAVE_NETCDF 1 > > #define PETSC_HAVE_NETDB_H 1 > > #define PETSC_HAVE_NETINET_IN_H 1 > > #define PETSC_HAVE_OPENBLAS 1 > > #define PETSC_HAVE_OPENMP 1 > > #define PETSC_HAVE_PACKAGES > ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" > > #define PETSC_HAVE_PNETCDF 1 > > #define PETSC_HAVE_POPEN 1 > > #define PETSC_HAVE_POSIX_MEMALIGN 1 > > #define PETSC_HAVE_PTHREAD 1 > > #define PETSC_HAVE_PWD_H 1 > > #define PETSC_HAVE_RAND 1 > > #define 
PETSC_HAVE_READLINK 1 > > #define PETSC_HAVE_REALPATH 1 > > #define PETSC_HAVE_REAL___FLOAT128 1 > > #define PETSC_HAVE_REGEX 1 > > #define PETSC_HAVE_RTLD_GLOBAL 1 > > #define PETSC_HAVE_RTLD_LAZY 1 > > #define PETSC_HAVE_RTLD_LOCAL 1 > > #define PETSC_HAVE_RTLD_NOW 1 > > #define PETSC_HAVE_SCALAPACK 1 > > #define PETSC_HAVE_SETJMP_H 1 > > #define PETSC_HAVE_SLEEP 1 > > #define PETSC_HAVE_SLEPC 1 > > #define PETSC_HAVE_SNPRINTF 1 > > #define PETSC_HAVE_SOCKET 1 > > #define PETSC_HAVE_SOWING 1 > > #define PETSC_HAVE_SO_REUSEADDR 1 > > #define PETSC_HAVE_STDATOMIC_H 1 > > #define PETSC_HAVE_STDINT_H 1 > > #define PETSC_HAVE_STRCASECMP 1 > > #define PETSC_HAVE_STRINGS_H 1 > > #define PETSC_HAVE_STRUCT_SIGACTION 1 > > #define PETSC_HAVE_SYS_PARAM_H 1 > > #define PETSC_HAVE_SYS_PROCFS_H 1 > > #define PETSC_HAVE_SYS_RESOURCE_H 1 > > #define PETSC_HAVE_SYS_SOCKET_H 1 > > #define PETSC_HAVE_SYS_TIMES_H 1 > > #define PETSC_HAVE_SYS_TIME_H 1 > > #define PETSC_HAVE_SYS_TYPES_H 1 > > #define PETSC_HAVE_SYS_UTSNAME_H 1 > > #define PETSC_HAVE_SYS_WAIT_H 1 > > #define PETSC_HAVE_TAU_PERFSTUBS 1 > > #define PETSC_HAVE_TGAMMA 1 > > #define PETSC_HAVE_TIME 1 > > #define PETSC_HAVE_TIME_H 1 > > #define PETSC_HAVE_UNAME 1 > > #define PETSC_HAVE_UNISTD_H 1 > > #define PETSC_HAVE_USLEEP 1 > > #define PETSC_HAVE_VA_COPY 1 > > #define PETSC_HAVE_VSNPRINTF 1 > > #define PETSC_HAVE_XMMINTRIN_H 1 > > #define PETSC_HDF5_HAVE_PARALLEL 1 > > #define PETSC_HDF5_HAVE_ZLIB 1 > > #define PETSC_INTPTR_T intptr_t > > #define PETSC_INTPTR_T_FMT "#" PRIxPTR > > #define PETSC_IS_COLORING_MAX USHRT_MAX > > #define PETSC_IS_COLORING_VALUE_TYPE short > > #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 > > #define PETSC_LEVEL1_DCACHE_LINESIZE 64 > > #define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" > > #define PETSC_MAX_PATH_LEN 4096 > > #define PETSC_MEMALIGN 16 > > #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch > -Wno-stringop-overflow -O3 > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath > -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags > -lmpi" > > #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT > > #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" > > #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA > > #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 > > #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 > > #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 > > #define PETSC_PYTHON_EXE "/usr/bin/python3" > > #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) > > #define PETSC_REPLACE_DIR_SEPARATOR '\\' > > #define PETSC_SIGNAL_CAST > > #define PETSC_SIZEOF_INT 4 > > #define PETSC_SIZEOF_LONG 8 > > #define PETSC_SIZEOF_LONG_LONG 8 > > #define PETSC_SIZEOF_SIZE_T 8 > > #define PETSC_SIZEOF_VOID_P 8 > > #define PETSC_SLSUFFIX "so" > > #define PETSC_UINTPTR_T uintptr_t > > #define PETSC_UINTPTR_T_FMT "#" PRIxPTR > > #define PETSC_UNUSED __attribute((unused)) > > #define PETSC_USE_AVX512_KERNELS 1 > > #define PETSC_USE_BACKWARD_LOOP 1 > > #define PETSC_USE_CTABLE 1 > > #define PETSC_USE_DMLANDAU_2D 1 > > #define PETSC_USE_INFO 1 > > #define PETSC_USE_ISATTY 1 > > #define PETSC_USE_LOG 1 > > #define PETSC_USE_MALLOC_COALESCED 1 > > #define PETSC_USE_PROC_FOR_SIZE 1 > > #define PETSC_USE_REAL_DOUBLE 1 > > #define PETSC_USE_SHARED_LIBRARIES 1 > > #define PETSC_USE_SINGLE_LIBRARY 1 > > #define PETSC_USE_SOCKET_VIEWER 1 > > #define PETSC_USE_VISIBILITY_C 1 > > #define PETSC_USE_VISIBILITY_CXX 1 > > 
#define PETSC_USING_64BIT_PTR 1 > > #define PETSC_USING_F2003 1 > > #define PETSC_USING_F90FREEFORM 1 > > #define PETSC_VERSION_BRANCH_GIT "main" > > #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" > > #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" > > #define PETSC__BSD_SOURCE 1 > > #define PETSC__DEFAULT_SOURCE 1 > > #define PETSC__GNU_SOURCE 1 > > ----------------------------------------- > > Using C/C++ include paths: > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include > -I/home/vrkaka/SLlibs/petsc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o > .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch > -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 > > Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx > -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden > -O3 -std=gnu++20 > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include > -I/home/vrkaka/SLlibs/petsc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > > Using Fortran include/module paths: > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include > -I/home/vrkaka/SLlibs/petsc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > Using Fortran compile: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall > -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch > -Wno-unused-dummy-argument -O3 -fopenmp > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include > -I/home/vrkaka/SLlibs/petsc/include > -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp > > ----------------------------------------- > > Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc > > Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector > -fvisibility=hidden -O3 > > Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 > > Using Fortran flags: -fopenmp -Wall -ffree-line-length-none > -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 > > ----------------------------------------- > > Using libraries: > -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib > -lslepc -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 > -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord > -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC > -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi > -lgfortran 
-lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ > > ------------------------------------------ > > Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec > > ------------------------------------------ > > Using MAKE: /usr/bin/gmake > > Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: > --no-print-directory -- PETSC_DIR=/home/vrkaka/SLlibs/petsc > PETSC_ARCH=arch-linux-c-opt > SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > > ========================================== > > /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 > --output-sync=recurse V= slepc_libs > > /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegen.py > --petsc-arch=arch-linux-c-opt > --pkg-dir=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > --pkg-name=slepc --pkg-pkgs=sys,eps,svd,pep,nep,mfn,lme > --pkg-arch=arch-linux-c-opt > > CC arch-linux-c-opt/obj/sys/ftn-auto/slepcscf.o > > CC arch-linux-c-opt/obj/sys/ftn-auto/slepcinitf.o > > CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_startf.o > > CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_start.o > > CC arch-linux-c-opt/obj/sys/dlregisslepc.o > > CC arch-linux-c-opt/obj/sys/slepcutil.o > > CC arch-linux-c-opt/obj/sys/slepcinit.o > > CC arch-linux-c-opt/obj/sys/slepcsc.o > > CC arch-linux-c-opt/obj/sys/slepccontour.o > > Use "/usr/bin/gmake V=1" to see verbose compile lines, "/usr/bin/gmake > V=0" to suppress. > > FC arch-linux-c-opt/obj/sys/f90-mod/slepcsysmod.o > > CC arch-linux-c-opt/obj/sys/vec/ftn-auto/vecutilf.o > > CC arch-linux-c-opt/obj/sys/ftn-custom/zslepcutil.o > > CC arch-linux-c-opt/obj/sys/vec/pool.o > > CC arch-linux-c-opt/obj/sys/mat/ftn-auto/matutilf.o > > CC arch-linux-c-opt/obj/sys/vec/vecutil.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-custom/zpolygon.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-auto/rgpolygonf.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/ring/ftn-auto/rgringf.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-custom/zellipse.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-auto/rgellipsef.o > > CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/rgellipse.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-custom/zinterval.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-auto/rgintervalf.o > > CC arch-linux-c-opt/obj/sys/classes/rg/impls/ring/rgring.o > > CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgregis.o > > CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/rgpolygon.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-auto/rgbasicf.o > > CC arch-linux-c-opt/obj/sys/mat/matutil.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-custom/zrgf.o > > CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgbasic.o > > CC > arch-linux-c-opt/obj/sys/classes/fn/impls/phi/ftn-auto/fnphif.o > > CC > arch-linux-c-opt/obj/sys/classes/rg/impls/interval/rginterval.o > > CC > arch-linux-c-opt/obj/sys/classes/fn/impls/combine/ftn-auto/fncombinef.o > > CC arch-linux-c-opt/obj/sys/classes/fn/impls/phi/fnphi.o > > CC arch-linux-c-opt/obj/sys/vec/veccomp.o > > CC > arch-linux-c-opt/obj/sys/classes/fn/impls/rational/ftn-custom/zrational.o > > CC arch-linux-c-opt/obj/sys/classes/fn/impls/sqrt/fnsqrt.o > > CC arch-linux-c-opt/obj/sys/classes/fn/impls/fnutil.o > > CC arch-linux-c-opt/obj/sys/classes/fn/impls/combine/fncombine.o > > CC arch-linux-c-opt/obj/sys/classes/fn/impls/log/fnlog.o > > CC 
arch-linux-c-opt/obj/sys/classes/fn/interface/fnregis.o > > CC > arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-auto/fnbasicf.o > > CC > arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-custom/zfnf.o > > CC arch-linux-c-opt/obj/sys/classes/fn/impls/invsqrt/fninvsqrt.o > > CC > arch-linux-c-opt/obj/sys/classes/fn/impls/rational/fnrational.o > > CC > arch-linux-c-opt/obj/sys/classes/st/impls/cayley/ftn-auto/cayleyf.o > > CC > arch-linux-c-opt/obj/sys/classes/st/impls/precond/ftn-auto/precondf.o > > CC arch-linux-c-opt/obj/sys/classes/st/impls/cayley/cayley.o > > CC > arch-linux-c-opt/obj/sys/classes/st/impls/filter/ftn-auto/filterf.o > > CC arch-linux-c-opt/obj/sys/classes/st/impls/precond/precond.o > > CC arch-linux-c-opt/obj/sys/classes/st/impls/sinvert/sinvert.o > > CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/filter.o > > CC arch-linux-c-opt/obj/sys/classes/fn/interface/fnbasic.o > > CC arch-linux-c-opt/obj/sys/classes/st/impls/shift/shift.o > > CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/shell.o > > CC > arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-auto/shellf.o > > CC > arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-custom/zshell.o > > CC arch-linux-c-opt/obj/sys/classes/fn/impls/exp/fnexp.o > > CC arch-linux-c-opt/obj/sys/classes/st/interface/stregis.o > > CC > arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsetf.o > > CC arch-linux-c-opt/obj/sys/classes/st/interface/stset.o > > CC > arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stfuncf.o > > CC > arch-linux-c-opt/obj/sys/classes/st/interface/ftn-custom/zstf.o > > CC arch-linux-c-opt/obj/sys/classes/st/interface/stshellmat.o > > CC > arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stslesf.o > > CC arch-linux-c-opt/obj/sys/classes/st/interface/stfunc.o > > CC arch-linux-c-opt/obj/sys/classes/st/interface/stsles.o > > CC > arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsolvef.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/ftn-auto/bvtensorf.o > > CC arch-linux-c-opt/obj/sys/classes/st/interface/stsolve.o > > CC arch-linux-c-opt/obj/sys/classes/bv/impls/contiguous/contig.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbiorthog.o > > CC arch-linux-c-opt/obj/sys/classes/bv/impls/mat/bvmat.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvblas.o > > CC arch-linux-c-opt/obj/sys/classes/bv/impls/svec/svec.o > > CC arch-linux-c-opt/obj/sys/classes/bv/impls/vecs/vecs.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvkrylov.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvfunc.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvregis.o > > CC arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/bvtensor.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbasic.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvcontour.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-custom/zbvf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbiorthogf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbasicf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvcontourf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvfuncf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvglobalf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvkrylovf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvopsf.o > > CC > arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvorthogf.o > > CC 
arch-linux-c-opt/obj/sys/classes/bv/interface/bvops.o > > CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/filtlan.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvglobal.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvlapack.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/ftn-auto/dshsvdf.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/impls/svd/ftn-auto/dssvdf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/dsutil.o > > CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvorthog.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-auto/dspepf.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-custom/zdspepf.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/impls/nep/ftn-auto/dsnepf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghep/dsghep.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhepts/dsnhepts.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/svd/dssvd.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/gnhep/dsgnhep.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/dspep.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhep/dsnhep.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/dshsvd.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/nep/dsnep.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/hz.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dmerg2.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dlaed3m.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/ftn-auto/dsgsvdf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsbtdc.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsrtdf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dibtdc.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsbasicf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/interface/dsbasic.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-custom/zdsf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/invit.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsopsf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/interface/dsops.o > > CC > arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsprivf.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/dshep.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/dsghiep.o > > CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/ftn-auto/lobpcgf.o > > CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/ftn-auto/rqcgf.o > > CC arch-linux-c-opt/obj/eps/impls/lyapii/ftn-auto/lyapiif.o > > CC arch-linux-c-opt/obj/sys/classes/ds/interface/dspriv.o > > CC arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/dsgsvd.o > > CC arch-linux-c-opt/obj/eps/impls/subspace/subspace.o > > CC arch-linux-c-opt/obj/eps/impls/external/scalapack/scalapack.o > > CC arch-linux-c-opt/obj/eps/impls/lapack/lapack.o > > CC arch-linux-c-opt/obj/eps/impls/ciss/ftn-auto/cissf.o > > CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/rqcg.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdschm.o > > CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/lobpcg.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/davidson.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdtestconv.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdinitv.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdgd2.o > > CC arch-linux-c-opt/obj/eps/impls/lyapii/lyapii.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/jd/ftn-auto/jdf.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/gd/ftn-auto/gdf.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdcalcpairs.o > > CC 
arch-linux-c-opt/obj/eps/impls/davidson/gd/gd.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdutils.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/jd/jd.o > > CC > arch-linux-c-opt/obj/eps/impls/krylov/lanczos/ftn-auto/lanczosf.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdupdatev.o > > CC > arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/ftn-auto/arnoldif.o > > CC arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/arnoldi.o > > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-indef.o > > CC arch-linux-c-opt/obj/eps/impls/krylov/epskrylov.o > > CC arch-linux-c-opt/obj/eps/impls/davidson/dvdimprovex.o > > CC arch-linux-c-opt/obj/eps/impls/ciss/ciss.o > > CC > arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-custom/zkrylovschurf.o > > CC > arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-auto/krylovschurf.o > > CC arch-linux-c-opt/obj/eps/impls/power/ftn-auto/powerf.o > > CC > arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-twosided.o > > CC arch-linux-c-opt/obj/eps/interface/dlregiseps.o > > CC arch-linux-c-opt/obj/eps/interface/epsbasic.o > > CC arch-linux-c-opt/obj/eps/interface/epsregis.o > > CC arch-linux-c-opt/obj/eps/impls/krylov/lanczos/lanczos.o > > CC arch-linux-c-opt/obj/eps/interface/epsdefault.o > > CC arch-linux-c-opt/obj/eps/interface/epsmon.o > > CC > arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/krylovschur.o > > CC arch-linux-c-opt/obj/eps/interface/epsopts.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsbasicf.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsdefaultf.o > > CC arch-linux-c-opt/obj/eps/interface/epssetup.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsmonf.o > > CC arch-linux-c-opt/obj/eps/impls/power/power.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssetupf.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsviewf.o > > CC arch-linux-c-opt/obj/eps/interface/epssolve.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsoptsf.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssolvef.o > > CC arch-linux-c-opt/obj/eps/interface/ftn-custom/zepsf.o > > CC arch-linux-c-opt/obj/svd/impls/lanczos/ftn-auto/gklanczosf.o > > CC arch-linux-c-opt/obj/svd/impls/cross/ftn-auto/crossf.o > > CC arch-linux-c-opt/obj/eps/interface/epsview.o > > CC arch-linux-c-opt/obj/svd/impls/external/scalapack/svdscalap.o > > CC arch-linux-c-opt/obj/svd/impls/randomized/rsvd.o > > CC arch-linux-c-opt/obj/svd/impls/trlanczos/ftn-auto/trlanczosf.o > > CC arch-linux-c-opt/obj/svd/impls/cyclic/ftn-auto/cyclicf.o > > CC arch-linux-c-opt/obj/svd/interface/dlregissvd.o > > CC arch-linux-c-opt/obj/svd/interface/svdbasic.o > > CC arch-linux-c-opt/obj/svd/impls/lapack/svdlapack.o > > CC arch-linux-c-opt/obj/svd/impls/lanczos/gklanczos.o > > CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-slice.o > > CC arch-linux-c-opt/obj/svd/interface/svddefault.o > > CC arch-linux-c-opt/obj/svd/impls/cross/cross.o > > CC arch-linux-c-opt/obj/svd/interface/svdregis.o > > CC arch-linux-c-opt/obj/svd/interface/svdmon.o > > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdbasicf.o > > CC arch-linux-c-opt/obj/svd/interface/svdopts.o > > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svddefaultf.o > > CC arch-linux-c-opt/obj/svd/interface/svdsetup.o > > CC arch-linux-c-opt/obj/svd/interface/svdsolve.o > > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdmonf.o > > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdoptsf.o > > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsetupf.o > > CC 
arch-linux-c-opt/obj/svd/interface/ftn-custom/zsvdf.o > > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsolvef.o > > CC arch-linux-c-opt/obj/svd/interface/svdview.o > > CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdviewf.o > > CC > arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/ftn-auto/qarnoldif.o > > CC arch-linux-c-opt/obj/pep/impls/peputils.o > > CC arch-linux-c-opt/obj/svd/impls/cyclic/cyclic.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/qslicef.o > > CC > arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-custom/zstoarf.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/pepkrylov.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/stoarf.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ftn-auto/ptoarf.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/qarnoldi.o > > CC arch-linux-c-opt/obj/pep/impls/linear/ftn-auto/linearf.o > > CC arch-linux-c-opt/obj/pep/impls/linear/qeplin.o > > CC arch-linux-c-opt/obj/pep/impls/jd/ftn-auto/pjdf.o > > CC arch-linux-c-opt/obj/pep/interface/dlregispep.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/stoar.o > > CC arch-linux-c-opt/obj/pep/interface/pepbasic.o > > CC arch-linux-c-opt/obj/pep/interface/pepmon.o > > CC arch-linux-c-opt/obj/pep/impls/linear/linear.o > > CC arch-linux-c-opt/obj/pep/interface/pepdefault.o > > CC arch-linux-c-opt/obj/svd/impls/trlanczos/trlanczos.o > > CC arch-linux-c-opt/obj/pep/interface/pepregis.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ptoar.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepbasicf.o > > CC arch-linux-c-opt/obj/pep/interface/pepopts.o > > CC arch-linux-c-opt/obj/pep/interface/pepsetup.o > > CC arch-linux-c-opt/obj/pep/interface/pepsolve.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepdefaultf.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepmonf.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepoptsf.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsetupf.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-custom/zpepf.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepviewf.o > > CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsolvef.o > > CC arch-linux-c-opt/obj/pep/interface/peprefine.o > > CC arch-linux-c-opt/obj/pep/interface/pepview.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/qslice.o > > CC arch-linux-c-opt/obj/nep/impls/slp/ftn-auto/slpf.o > > CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-custom/znleigsf.o > > CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigs-fullbf.o > > CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigsf.o > > CC arch-linux-c-opt/obj/nep/impls/interpol/ftn-auto/interpolf.o > > CC arch-linux-c-opt/obj/nep/impls/slp/slp.o > > CC arch-linux-c-opt/obj/nep/impls/narnoldi/ftn-auto/narnoldif.o > > CC arch-linux-c-opt/obj/nep/impls/slp/slp-twosided.o > > CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs-fullb.o > > CC arch-linux-c-opt/obj/nep/impls/interpol/interpol.o > > CC arch-linux-c-opt/obj/nep/impls/rii/ftn-auto/riif.o > > CC arch-linux-c-opt/obj/nep/interface/dlregisnep.o > > CC arch-linux-c-opt/obj/nep/impls/narnoldi/narnoldi.o > > CC arch-linux-c-opt/obj/pep/impls/krylov/toar/nrefine.o > > CC arch-linux-c-opt/obj/nep/interface/nepdefault.o > > CC arch-linux-c-opt/obj/nep/interface/nepregis.o > > CC arch-linux-c-opt/obj/nep/impls/rii/rii.o > > CC arch-linux-c-opt/obj/nep/interface/nepbasic.o > > CC arch-linux-c-opt/obj/nep/interface/nepmon.o > > CC arch-linux-c-opt/obj/pep/impls/jd/pjd.o > > CC arch-linux-c-opt/obj/nep/interface/nepresolv.o > > CC 
arch-linux-c-opt/obj/nep/interface/nepopts.o > > CC arch-linux-c-opt/obj/nep/impls/nepdefl.o > > CC arch-linux-c-opt/obj/nep/interface/nepsetup.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepdefaultf.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepbasicf.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepmonf.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepoptsf.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepresolvf.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsetupf.o > > CC arch-linux-c-opt/obj/nep/interface/nepsolve.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsolvef.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepviewf.o > > CC arch-linux-c-opt/obj/nep/interface/ftn-custom/znepf.o > > CC arch-linux-c-opt/obj/mfn/interface/dlregismfn.o > > CC arch-linux-c-opt/obj/mfn/impls/krylov/mfnkrylov.o > > CC arch-linux-c-opt/obj/nep/interface/nepview.o > > CC arch-linux-c-opt/obj/nep/interface/neprefine.o > > CC arch-linux-c-opt/obj/mfn/interface/mfnmon.o > > CC arch-linux-c-opt/obj/mfn/interface/mfnregis.o > > CC arch-linux-c-opt/obj/mfn/impls/expokit/mfnexpokit.o > > CC arch-linux-c-opt/obj/mfn/interface/mfnopts.o > > CC arch-linux-c-opt/obj/mfn/interface/mfnbasic.o > > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnbasicf.o > > CC arch-linux-c-opt/obj/mfn/interface/mfnsolve.o > > CC arch-linux-c-opt/obj/mfn/interface/mfnsetup.o > > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnmonf.o > > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnoptsf.o > > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsetupf.o > > CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsolvef.o > > CC arch-linux-c-opt/obj/mfn/interface/ftn-custom/zmfnf.o > > CC arch-linux-c-opt/obj/lme/interface/dlregislme.o > > CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs.o > > CC arch-linux-c-opt/obj/lme/interface/lmeregis.o > > CC arch-linux-c-opt/obj/lme/interface/lmemon.o > > CC arch-linux-c-opt/obj/lme/impls/krylov/lmekrylov.o > > CC arch-linux-c-opt/obj/lme/interface/lmebasic.o > > CC arch-linux-c-opt/obj/lme/interface/lmeopts.o > > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmemonf.o > > CC arch-linux-c-opt/obj/lme/interface/lmesetup.o > > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmebasicf.o > > CC arch-linux-c-opt/obj/lme/interface/lmesolve.o > > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmeoptsf.o > > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesolvef.o > > CC arch-linux-c-opt/obj/lme/interface/lmedense.o > > CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesetupf.o > > CC arch-linux-c-opt/obj/lme/interface/ftn-custom/zlmef.o > > FC arch-linux-c-opt/obj/sys/classes/rg/f90-mod/slepcrgmod.o > > FC arch-linux-c-opt/obj/sys/classes/bv/f90-mod/slepcbvmod.o > > FC arch-linux-c-opt/obj/sys/classes/fn/f90-mod/slepcfnmod.o > > FC arch-linux-c-opt/obj/lme/f90-mod/slepclmemod.o > > FC arch-linux-c-opt/obj/sys/classes/ds/f90-mod/slepcdsmod.o > > FC arch-linux-c-opt/obj/sys/classes/st/f90-mod/slepcstmod.o > > FC arch-linux-c-opt/obj/mfn/f90-mod/slepcmfnmod.o > > FC arch-linux-c-opt/obj/eps/f90-mod/slepcepsmod.o > > FC arch-linux-c-opt/obj/svd/f90-mod/slepcsvdmod.o > > FC arch-linux-c-opt/obj/pep/f90-mod/slepcpepmod.o > > FC arch-linux-c-opt/obj/nep/f90-mod/slepcnepmod.o > > CLINKER arch-linux-c-opt/lib/libslepc.so.3.019.0 > > Now to install the library do: > > make > SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > PETSC_DIR=/home/vrkaka/SLlibs/petsc install > > ========================================= > > 
*** Installing SLEPc *** > > *** Installing SLEPc at prefix location: > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt *** > > ==================================== > > Install complete. > > Now to check if the libraries are working do (in current directory): > > make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt > PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check > > ==================================== > > /usr/bin/gmake --no-print-directory -f makefile > PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc > SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > install-builtafterslepc > > /usr/bin/gmake --no-print-directory -f makefile > PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc > SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc > slepc4py-install > > gmake[6]: Nothing to be done for 'slepc4py-install'. > > ========================================= > > Now to check if the libraries are working do: > > make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check > > ========================================= > > > > > > > > > > and here is the cmake message when configuring the project: > > > > vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ cmake .. > > -- The CXX compiler identification is GNU 11.3.0 > > -- Detecting CXX compiler ABI info > > -- Detecting CXX compiler ABI info - done > > -- Check for working CXX compiler: /usr/bin/c++ - skipped > > -- Detecting CXX compile features > > -- Detecting CXX compile features - done > > -- MPI headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > -- MPI library found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmpich.so > > -- GMSH HEADERS NOT FOUND (OPTIONAL) > > -- GMSH LIBRARY NOT FOUND (OPTIONAL) > > -- Ipopt headers found at /home/vrkaka/Ipopt/installation/include/coin-or > > -- Ipopt library found at /home/vrkaka/Ipopt/installation/lib/libipopt.so > > -- Blas header cblas.h found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > -- Blas library found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libopenblas.so > > -- Metis headers found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > -- Metis library found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmetis.so > > -- Mumps headers found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > -- Mumps library found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libcmumps.a > > -- Petsc header petsc.h found at /home/vrkaka/SLlibs/petsc/include > > -- Petsc header petscconf.h found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > -- Petsc library found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libpetsc.so > > -- Slepc headers found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include > > -- Slepc library found at > /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libslepc.so > > -- Configuring done > > -- Generating done > > -- Build files have been written to: /home/vrkaka/sparselizardipopt/build > > > > > > > > After that building the project with cmake goes fine and a simple mpi test > works > > > > > > -Kalle > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From ysjosh.lo at gmail.com Thu Jun 8 12:14:39 2023
From: ysjosh.lo at gmail.com (YuSh Lo)
Date: Thu, 8 Jun 2023 12:14:39 -0500
Subject: [petsc-users] IS natural numbering to global numbering
Message-ID: 

Hi,

I have an IS that contains some vertices that are in natural numbering.
How do I map them to global numbering without being distributed?

Thanks,
Josh
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nicolas.garcia at tum.de Thu Jun 8 17:00:01 2023
From: nicolas.garcia at tum.de (Nicolas Garcia Guzman)
Date: Thu, 8 Jun 2023 22:00:01 +0000
Subject: [petsc-users] Behavior of KSP iterations when using Restart
Message-ID: 

Hello,

I am solving a linear system using petsc4py, with the following command:

python main.py -ksp_type gmres -ksp_gmres_restart 16 -ksp_max_it 180000
-ksp_monitor -ksp_converged_reason -ksp_rtol 1e-15 -pc_type asm
-sub_pc_type ilu -sub_pc_factor_levels 1 -sub_ksp_type preonly

In the script all I do is import the libraries, load the linear system,
set options and solve.

However, simply changing the restart parameter to something like 20 will
make the solution iterate to exactly 2 times the restart parameter, and
then say the iteration diverged. This happens every time no matter the
parameter chosen, unless it's 16 or less. Is this expected behavior, or is
the problem coming from my linear system?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com Thu Jun 8 21:13:35 2023
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 8 Jun 2023 22:13:35 -0400
Subject: [petsc-users] Behavior of KSP iterations when using Restart
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 8, 2023 at 9:13 PM Nicolas Garcia Guzman wrote:

> Hello,
>
> I am solving a linear system using petsc4py, with the following command:
>
> python main.py -ksp_type gmres -ksp_gmres_restart 16 -ksp_max_it 180000
> -ksp_monitor -ksp_converged_reason -ksp_rtol 1e-15 -pc_type asm
> -sub_pc_type ilu -sub_pc_factor_levels 1 -sub_ksp_type preonly
>
> In the script all I do is import the libraries, load the linear system,
> set options and solve.
>
> However, simply changing the restart parameter to something like 20, will
> make the solution iterate to exactly 2 times the restart parameter, and
> then say the iteration diverged. This happens every time no matter the
> parameter chosen, unless it's 16 or less. Is this expected behavior or is
> the problem coming from my linear system?
>

1) Always send the complete output.

2) I am guessing that the iteration stops because the restarted residual
is too large. This could be due to your linear system. What kind of system
are you solving?

  Thanks,

     Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
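
A minimal petsc4py sketch related to the restart discussion above (this is
not the poster's main.py; the binary file names A.dat/b.dat and the exact
option handling are assumptions). It loads a system, applies the same
GMRES+ASM setup, and reports the convergence reason; running it with
-ksp_gmres_restart 20 -ksp_monitor_true_residual is one way to watch how
the true residual behaves at each restart.

import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

# Load A and b from PETSc binary files (assumed file names).
A = PETSc.Mat().create()
A.load(PETSc.Viewer().createBinary('A.dat', 'r'))
b = PETSc.Vec().create()
b.load(PETSc.Viewer().createBinary('b.dat', 'r'))
x = b.duplicate()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.GMRES)
ksp.getPC().setType(PETSc.PC.Type.ASM)
ksp.setTolerances(rtol=1e-15, max_it=180000)
ksp.setFromOptions()   # picks up -ksp_gmres_restart, -ksp_monitor_true_residual, ...
ksp.solve(b, x)
print('reason:', ksp.getConvergedReason(), 'iterations:', ksp.getIterationNumber())
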
From kalle.karhapaa at tuni.fi Fri Jun 9 00:46:50 2023
From: kalle.karhapaa at tuni.fi (=?iso-8859-1?Q?Kalle_Karhap=E4=E4_=28TAU=29?=)
Date: Fri, 9 Jun 2023 05:46:50 +0000
Subject: [petsc-users] PETSc downloading older version of OpenBLAS
Message-ID: 

Hi all

During install I'm checking out an older version of PETSc
(v3.17.1-512-g27c9ef7be8) but running into problems with -download-openblas
in configure.

I suspect the newest version of OpenBLAS that is being downloaded from git
is incompatible with this older version of petsc. Is there a way to have
petsc -download an older version of openblas (e.g. v0.3.20)?

Thanks
-Kalle
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kalle.karhapaa at tuni.fi Fri Jun 9 02:31:21 2023
From: kalle.karhapaa at tuni.fi (=?iso-8859-1?Q?Kalle_Karhap=E4=E4_=28TAU=29?=)
Date: Fri, 9 Jun 2023 07:31:21 +0000
Subject: [petsc-users] PETSc downloading older version of OpenBLAS
In-Reply-To: 
References: 
Message-ID: 

I managed to download a compatible openBLAS with the configure line

./configure --download-openblas --download-openblas-commit='0b678b19dc03f2a999d6e038814c4c50b9640a4e' --with-openmp --with-mpi=0 --with-shared-libraries=1 --with-mumps-serial=1 --download-mumps --download-metis --download-slepc --with-debugging=0 --with-scalar-type=real --with-x=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3';

--download-openblas-commit= didn't work by itself, but together with
-download-openblas it did. Case closed!

-Kalle

From: Kalle Karhapää (TAU)
Sent: Friday, June 9, 2023 8.47
To: petsc-users at mcs.anl.gov
Subject: PETSc downloading older version of OpenBLAS

Hi all

During install I'm checking out an older version of PETSc
(v3.17.1-512-g27c9ef7be8) but running into problems with -download-openblas
in configure.

I suspect the newest version of OpenBLAS that is being downloaded from git
is incompatible with this older version of petsc. Is there a way to have
petsc -download an older version of openblas (e.g. v0.3.20)?

Thanks
-Kalle
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kalle.karhapaa at tuni.fi Fri Jun 9 04:24:17 2023
From: kalle.karhapaa at tuni.fi (=?utf-8?B?S2FsbGUgS2FyaGFww6TDpCAoVEFVKQ==?=)
Date: Fri, 9 Jun 2023 09:24:17 +0000
Subject: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT
In-Reply-To: 
References: 
Message-ID: 

We ended up installing an earlier petsc version (an earlier openblas
version was needed as well) and making some changes to the code. The
PMI/MPI errors from before stopped showing up, and the commands

~/sparselizardipopt/build$ mpirun -np 2 ./simulations/default/default 1e2
~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2

are working without changes to PATH. Case closed for now, thanks for the
help!

-Kalle

From: Junchao Zhang
Sent: Thursday, June 8, 2023 15.56
To: Kalle Karhapää (TAU)
Cc: Barry Smith; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT

It means the mpiexec in your original command line

vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2

was not /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec.
Try to use the full path or add /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin
to your PATH.

--Junchao Zhang


On Thu, Jun 8, 2023 at 12:31 AM Kalle Karhapää
From: Junchao Zhang
Sent: Thursday, June 8, 2023 15.56
To: Kalle Karhapää (TAU)
Cc: Barry Smith; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT

It means the mpiexec in your original command line

vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2

was not /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec. Try to use
the full path, or add /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin in your
PATH.

--Junchao Zhang

On Thu, Jun 8, 2023 at 12:31 AM Kalle Karhapää (TAU) wrote:

Thanks Barry, make check works:

Running check examples to verify correct installation
Using PETSC_DIR=/home/vrkaka/SLlibs/petsc and PETSC_ARCH=arch-linux-c-opt
C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process
C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes
C/C++ example src/snes/tutorials/ex19 run successfully with mumps
C/C++ example src/vec/vec/tests/ex47 run successfully with hdf5
Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process
Running check examples to verify correct installation
Using SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc, PETSC_DIR=/home/vrkaka/SLlibs/petsc, and PETSC_ARCH=arch-linux-c-opt
C/C++ example src/eps/tests/test10 run successfully with 1 MPI process
C/C++ example src/eps/tests/test10 run successfully with 2 MPI process
Fortran example src/eps/tests/test7f run successfully with 1 MPI process
Completed SLEPc test examples
Completed PETSc test examples

make getmpiexec gives:

/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec

which is the mpiexec petsc built

From: Barry Smith
Sent: Wednesday, June 7, 2023 17.33
To: Kalle Karhapää (TAU)
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] PMI/MPI error when running MPICH from PETSc with sparselizard/IPOPT

Does make check work in the PETSc directory?

Is it possible the mpiexec in "mpiexec -np 2 ./simulations/default/default 1e2"
is not the mpiexec built by PETSc? In the PETSc directory you can run make
getmpiexec to see what mpiexec PETSc built.

On Jun 7, 2023, at 6:07 AM, Kalle Karhapää (TAU) wrote:

Hi!

I am using petsc in a topology optimization project with sparselizard and
ipopt. I am hoping to use mpich to run sparselizard/ipopt calculations
faster, but I'm getting the following error straight away:

vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ mpiexec -np 2 ./simulations/default/default 1e2
[proxy:0:0 at WKS-101259-LT] HYD_pmcd_pmi_parse_pmi_cmd (pm/pmiserv/common.c:57):
[proxy:0:0 at WKS-101259-LT] handle_pmi_cmd (pm/pmiserv/pmip_cb.c:115): unable to parse PMI command
[proxy:0:0 at WKS-101259-LT] pmi_cb (pm/pmiserv/pmip_cb.c:362): unable to handle PMI command
[proxy:0:0 at WKS-101259-LT] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0 at WKS-101259-LT] main (pm/pmiserv/pmip.c:169): demux engine error waiting for event

The problem persists with different numbers of cores -np 1-10. Sometimes
after the previous message there is the additional error:

Fatal error in internal_Init: Other MPI error, error stack:
internal_Init(66): MPI_Init(argc=(nil), argv=(nil)) failed
internal_Init(46): Cannot call MPI_INIT or MPI_INIT_THREAD more than once

In petsc configuration I am downloading mpich. Then I'm building the
sparselizard project with the same mpich downloaded through petsc
installation.
here is my petsc conf: ./configure --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3'; petsc install went as follows: vrkaka at WKS-101259-LT:~/sparselizardipopt/install_external_libs$ ./install_petsc.sh mkdir: cannot create directory ?/home/vrkaka/SLlibs?: File exists __________________________________________ FETCHING THE LATEST PETSC VERSION FROM GIT Cloning into 'petsc'... remote: Enumerating objects: 1097079, done. remote: Counting objects: 100% (687/687), done. remote: Compressing objects: 100% (144/144), done. remote: Total 1097079 (delta 555), reused 664 (delta 539), pack-reused 1096392 Receiving objects: 100% (1097079/1097079), 344.72 MiB | 7.14 MiB/s, done. Resolving deltas: 100% (840415/840415), done. __________________________________________ CONFIGURING PETSC ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= ============================================================================================= Trying to download https://github.com/pmodels/mpich/releases/download/v4.1.1/mpich-4.1.1.tar.gz for MPICH ============================================================================================= ============================================================================================= Running configure on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Running make on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Running make install on MPICH; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-sowing.git for SOWING ============================================================================================= ============================================================================================= Running configure on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running make on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running make install on SOWING; this may take several minutes ============================================================================================= ============================================================================================= Running arch-linux-c-opt/bin/bfort to generate Fortran stubs 
============================================================================================= ============================================================================================= Trying to download http://www.zlib.net/zlib-1.2.13.tar.gz for ZLIB ============================================================================================= ============================================================================================= Building and installing zlib; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.2/src/hdf5-1.12.2.tar.bz2 for HDF5 ============================================================================================= ============================================================================================= Running configure on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Running make on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Running make install on HDF5; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/parallel-netcdf/pnetcdf for PNETCDF ============================================================================================= ============================================================================================= Running libtoolize on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running autoreconf on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running configure on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make install on PNETCDF; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/Unidata/netcdf-c/archive/v4.9.1.tar.gz for NETCDF ============================================================================================= ============================================================================================= Running configure on NETCDF; this may take 
several minutes ============================================================================================= ============================================================================================= Running make on NETCDF; this may take several minutes ============================================================================================= ============================================================================================= Running make install on NETCDF; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-med.git for MED ============================================================================================= ============================================================================================= Configuring MED with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing MED; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/gsjaardema/seacas.git for EXODUSII ============================================================================================= ============================================================================================= Configuring EXODUSII with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing EXODUSII; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://bitbucket.org/petsc/pkg-metis.git for METIS ============================================================================================= ============================================================================================= Configuring METIS with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing METIS; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://github.com/xianyi/OpenBLAS.git for OPENBLAS ============================================================================================= ============================================================================================= Compiling OpenBLAS; this may take several minutes ============================================================================================= ============================================================================================= Installing OpenBLAS 
============================================================================================= ============================================================================================= Trying to download https://github.com/Reference-ScaLAPACK/scalapack for SCALAPACK ============================================================================================= ============================================================================================= Configuring SCALAPACK with CMake; this may take several minutes ============================================================================================= ============================================================================================= Compiling and installing SCALAPACK; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://graal.ens-lyon.fr/MUMPS/MUMPS_5.6.0.tar.gz for MUMPS ============================================================================================= ============================================================================================= Compiling MUMPS; this may take several minutes ============================================================================================= ============================================================================================= Installing MUMPS; this may take several minutes ============================================================================================= ============================================================================================= Trying to download https://gitlab.com/slepc/slepc.git for SLEPC ============================================================================================= ============================================================================================= SLEPc examples are available at arch-linux-c-opt/externalpackages/git.slepc export SLEPC_DIR=arch-linux-c-opt ============================================================================================= Compilers: C Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 -fopenmp Version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 C++ Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -fopenmp Version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Fortran Compiler: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp Version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Linkers: Shared linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Dynamic linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -fopenmp -shared -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Libraries linked against: BlasLapack: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: 
-Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas uses OpenMP; use export OMP_NUM_THREADS=
or -omp_num_threads
to control the number of threads uses 4 byte integers MPI: Version: 4 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec Implementation: mpich4 MPICH_NUMVERSION: 40101300 MPICH: python: Executable: /usr/bin/python3 openmp: Version: 201511 pthread: cmake: Version: 3.22.1 Executable: /usr/bin/cmake openblas: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lopenblas uses OpenMP; use export OMP_NUM_THREADS=
or -omp_num_threads
to control the number of threads zlib: Version: 1.2.13 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lz hdf5: Version: 1.12.2 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lhdf5_hl -lhdf5 netcdf: Version: 4.9.1 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lnetcdf pnetcdf: Version: 1.12.3 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lpnetcdf metis: Version: 5.1.0 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmetis slepc: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lslepc regex: MUMPS: Version: 5.6.0 Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -ldmumps -lmumps_common -lpord -lpthread uses OpenMP; use export OMP_NUM_THREADS=
or -omp_num_threads
to control the number of threads scalapack: Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lscalapack exodusii: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lexoIIv2for32 -lexodus med: Includes: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmedC -lmed sowing: Version: 1.1.26 Executable: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/bfort PETSc: Language used to compile PETSc: C PETSC_ARCH: arch-linux-c-opt PETSC_DIR: /home/vrkaka/SLlibs/petsc Prefix: Scalar type: real Precision: double Support for __float128 Integer size: 4 bytes Single library: yes Shared libraries: yes Memory alignment from malloc(): 16 bytes Using GNU make: /usr/bin/gmake xxx=========================================================================xxx Configure stage complete. Now build PETSc libraries with: make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt all xxx=========================================================================xxx __________________________________________ COMPILING PETSC /usr/bin/python3 ./config/gmakegen.py --petsc-arch=arch-linux-c-opt /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegentest.py --petsc-dir=/home/vrkaka/SLlibs/petsc --petsc-arch=arch-linux-c-opt --testdir=./arch-linux-c-opt/tests make: '/home/vrkaka/SLlibs/petsc' is up to date. make: 'arch-linux-c-opt' is up to date. /home/vrkaka/SLlibs/petsc/lib/petsc/bin/petscnagupgrade.py:14: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives from distutils.version import LooseVersion as Version ========================================== See documentation/faq.html and documentation/bugreporting.html for help with installation problems. Please send EVERYTHING printed out below when reporting problems. Please check the mailing list archives and consider subscribing. 
https://petsc.org/release/community/mailing/ ========================================== Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:19:10 +0300 Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ----------------------------------------- Using PETSc directory: /home/vrkaka/SLlibs/petsc Using PETSc arch: arch-linux-c-opt ----------------------------------------- PETSC_VERSION_RELEASE 0 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 19 PETSC_VERSION_SUBMINOR 2 PETSC_VERSION_DATE "unknown" PETSC_VERSION_GIT "unknown" PETSC_VERSION_DATE_GIT "unknown" ----------------------------------------- Using configure Options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 Using configuration flags: #define PETSC_ARCH "arch-linux-c-opt" #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) #define PETSC_BLASLAPACK_UNDERSCORE 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" #define PETSC_DIR_SEPARATOR '/' #define PETSC_FORTRAN_CHARLEN_T size_t #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_ATTRIBUTEALIGNED 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_BZERO 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_ATOMIC 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 1 #define PETSC_HAVE_CXX_DIALECT_CXX17 1 #define PETSC_HAVE_CXX_DIALECT_CXX20 1 #define PETSC_HAVE_DLADDR 1 #define PETSC_HAVE_DLCLOSE 1 #define PETSC_HAVE_DLERROR 1 #define PETSC_HAVE_DLFCN_H 1 #define PETSC_HAVE_DLOPEN 1 #define PETSC_HAVE_DLSYM 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_DRAND48 1 #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_EXECUTABLE_EXPORT 1 #define PETSC_HAVE_EXODUSII 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FE_VALUES 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORK 1 #define PETSC_HAVE_FORTRAN 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETDOMAINNAME 1 #define PETSC_HAVE_GETHOSTBYNAME 1 #define PETSC_HAVE_GETHOSTNAME 1 #define PETSC_HAVE_GETPAGESIZE 1 #define PETSC_HAVE_GETRUSAGE 1 #define PETSC_HAVE_HDF5 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MED 1 #define 
PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_METIS 1 #define PETSC_HAVE_MKSTEMP 1 #define PETSC_HAVE_MMAP 1 #define PETSC_HAVE_MPICH 1 #define PETSC_HAVE_MPICH_NUMVERSION 40101300 #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 #define PETSC_HAVE_MPIIO 1 #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 #define PETSC_HAVE_MPI_COMBINER_DUP 1 #define PETSC_HAVE_MPI_COMBINER_NAMED 1 #define PETSC_HAVE_MPI_F90MODULE 1 #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 #define PETSC_HAVE_MPI_INIT_THREAD 1 #define PETSC_HAVE_MPI_INT64_T 1 #define PETSC_HAVE_MPI_LARGE_COUNT 1 #define PETSC_HAVE_MPI_LONG_DOUBLE 1 #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 #define PETSC_HAVE_MPI_ONE_SIDED 1 #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 #define PETSC_HAVE_MPI_RGET 1 #define PETSC_HAVE_MPI_WIN_CREATE 1 #define PETSC_HAVE_MUMPS 1 #define PETSC_HAVE_NANOSLEEP 1 #define PETSC_HAVE_NETCDF 1 #define PETSC_HAVE_NETDB_H 1 #define PETSC_HAVE_NETINET_IN_H 1 #define PETSC_HAVE_OPENBLAS 1 #define PETSC_HAVE_OPENMP 1 #define PETSC_HAVE_PACKAGES ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" #define PETSC_HAVE_PNETCDF 1 #define PETSC_HAVE_POPEN 1 #define PETSC_HAVE_POSIX_MEMALIGN 1 #define PETSC_HAVE_PTHREAD 1 #define PETSC_HAVE_PWD_H 1 #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_READLINK 1 #define PETSC_HAVE_REALPATH 1 #define PETSC_HAVE_REAL___FLOAT128 1 #define PETSC_HAVE_REGEX 1 #define PETSC_HAVE_RTLD_GLOBAL 1 #define PETSC_HAVE_RTLD_LAZY 1 #define PETSC_HAVE_RTLD_LOCAL 1 #define PETSC_HAVE_RTLD_NOW 1 #define PETSC_HAVE_SCALAPACK 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_SLEEP 1 #define PETSC_HAVE_SLEPC 1 #define PETSC_HAVE_SNPRINTF 1 #define PETSC_HAVE_SOCKET 1 #define PETSC_HAVE_SOWING 1 #define PETSC_HAVE_SO_REUSEADDR 1 #define PETSC_HAVE_STDATOMIC_H 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_STRCASECMP 1 #define PETSC_HAVE_STRINGS_H 1 #define PETSC_HAVE_STRUCT_SIGACTION 1 #define PETSC_HAVE_SYS_PARAM_H 1 #define PETSC_HAVE_SYS_PROCFS_H 1 #define PETSC_HAVE_SYS_RESOURCE_H 1 #define PETSC_HAVE_SYS_SOCKET_H 1 #define PETSC_HAVE_SYS_TIMES_H 1 #define PETSC_HAVE_SYS_TIME_H 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_SYS_UTSNAME_H 1 #define PETSC_HAVE_SYS_WAIT_H 1 #define PETSC_HAVE_TAU_PERFSTUBS 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_UNAME 1 #define PETSC_HAVE_UNISTD_H 1 #define PETSC_HAVE_USLEEP 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_HDF5_HAVE_PARALLEL 1 #define PETSC_HDF5_HAVE_ZLIB 1 #define PETSC_INTPTR_T intptr_t #define PETSC_INTPTR_T_FMT "#" PRIxPTR #define PETSC_IS_COLORING_MAX USHRT_MAX #define PETSC_IS_COLORING_VALUE_TYPE short #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 #define PETSC_LEVEL1_DCACHE_LINESIZE 64 #define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" #define PETSC_MAX_PATH_LEN 4096 #define PETSC_MEMALIGN 16 #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib 
-Wl,--enable-new-dtags -lmpi" #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PYTHON_EXE "/usr/bin/python3" #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_REPLACE_DIR_SEPARATOR '\\' #define PETSC_SIGNAL_CAST #define PETSC_SIZEOF_INT 4 #define PETSC_SIZEOF_LONG 8 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_SLSUFFIX "so" #define PETSC_UINTPTR_T uintptr_t #define PETSC_UINTPTR_T_FMT "#" PRIxPTR #define PETSC_UNUSED __attribute((unused)) #define PETSC_USE_AVX512_KERNELS 1 #define PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_CTABLE 1 #define PETSC_USE_DMLANDAU_2D 1 #define PETSC_USE_INFO 1 #define PETSC_USE_ISATTY 1 #define PETSC_USE_LOG 1 #define PETSC_USE_MALLOC_COALESCED 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_USE_SHARED_LIBRARIES 1 #define PETSC_USE_SINGLE_LIBRARY 1 #define PETSC_USE_SOCKET_VIEWER 1 #define PETSC_USE_VISIBILITY_C 1 #define PETSC_USE_VISIBILITY_CXX 1 #define PETSC_USING_64BIT_PTR 1 #define PETSC_USING_F2003 1 #define PETSC_USING_F90FREEFORM 1 #define PETSC_VERSION_BRANCH_GIT "main" #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" #define PETSC__BSD_SOURCE 1 #define PETSC__DEFAULT_SOURCE 1 #define PETSC__GNU_SOURCE 1 ----------------------------------------- Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 mpicc -show: gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi C compiler version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp mpicxx -show: g++ -Wno-lto-type-mismatch -Wno-psabi -O3 -std=gnu++20 -fPIC -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpicxx -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi C++ compiler version: g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp mpif90 -show: gfortran -fPIC -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -O3 -fallow-argument-mismatch -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -lmpifort -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi Fortran 
compiler version: GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 ----------------------------------------- Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 ----------------------------------------- Using system modules: Using mpi.h: # 1 "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include/mpi.h" 1 ----------------------------------------- Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ ------------------------------------------ Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec ------------------------------------------ Using MAKE: /usr/bin/gmake Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc ========================================== /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= libs FC arch-linux-c-opt/obj/sys/fsrc/somefort.o CXX arch-linux-c-opt/obj/sys/dll/cxx/demangle.o FC arch-linux-c-opt/obj/sys/f90-src/fsrc/f90_fwrap.o CC arch-linux-c-opt/obj/sys/f90-custom/zsysf90.o FC arch-linux-c-opt/obj/sys/f90-mod/petscsysmod.o CC arch-linux-c-opt/obj/sys/dll/dlimpl.o CC arch-linux-c-opt/obj/sys/dll/dl.o CC arch-linux-c-opt/obj/sys/dll/ftn-auto/regf.o CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostcontext.o CC arch-linux-c-opt/obj/sys/ftn-custom/zsys.o CXX arch-linux-c-opt/obj/sys/objects/device/impls/host/hostdevice.o CC arch-linux-c-opt/obj/sys/ftn-custom/zutils.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/global_dcontext.o CC arch-linux-c-opt/obj/sys/dll/reg.o CC arch-linux-c-opt/obj/sys/logging/xmlviewer.o CC arch-linux-c-opt/obj/sys/logging/utils/stack.o CC arch-linux-c-opt/obj/sys/logging/utils/classlog.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/device.o CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zpetscloghf.o CC arch-linux-c-opt/obj/sys/logging/utils/stagelog.o CC arch-linux-c-opt/obj/sys/logging/ftn-auto/xmllogeventf.o CC arch-linux-c-opt/obj/sys/logging/ftn-auto/plogf.o CC arch-linux-c-opt/obj/sys/logging/ftn-custom/zplogf.o CC arch-linux-c-opt/obj/sys/logging/utils/eventlog.o CC arch-linux-c-opt/obj/sys/python/ftn-custom/zpythonf.o CC arch-linux-c-opt/obj/sys/utils/arch.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/memory.o CC arch-linux-c-opt/obj/sys/python/pythonsys.o CC arch-linux-c-opt/obj/sys/utils/fhost.o CC arch-linux-c-opt/obj/sys/utils/fuser.o CC arch-linux-c-opt/obj/sys/utils/matheq.o CC arch-linux-c-opt/obj/sys/utils/mathclose.o CC arch-linux-c-opt/obj/sys/utils/mathfit.o CC arch-linux-c-opt/obj/sys/utils/mathinf.o CC arch-linux-c-opt/obj/sys/utils/ctable.o CC arch-linux-c-opt/obj/sys/utils/memc.o 
CC arch-linux-c-opt/obj/sys/utils/mpilong.o CC arch-linux-c-opt/obj/sys/logging/xmllogevent.o CC arch-linux-c-opt/obj/sys/utils/mpitr.o CC arch-linux-c-opt/obj/sys/utils/mpishm.o CC arch-linux-c-opt/obj/sys/utils/pbarrier.o CC arch-linux-c-opt/obj/sys/utils/mpiu.o CC arch-linux-c-opt/obj/sys/utils/psleep.o CC arch-linux-c-opt/obj/sys/utils/pdisplay.o CC arch-linux-c-opt/obj/sys/utils/psplit.o CC arch-linux-c-opt/obj/sys/utils/segbuffer.o CC arch-linux-c-opt/obj/sys/utils/mpimesg.o CC arch-linux-c-opt/obj/sys/utils/sortd.o CC arch-linux-c-opt/obj/sys/utils/sseenabled.o CC arch-linux-c-opt/obj/sys/utils/sortip.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zarchf.o CC arch-linux-c-opt/obj/sys/utils/mpits.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zfhostf.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zsortsof.o CC arch-linux-c-opt/obj/sys/utils/ftn-custom/zstrf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/memcf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpitsf.o CC arch-linux-c-opt/obj/sys/logging/plog.o CC arch-linux-c-opt/obj/sys/utils/str.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/mpiuf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psleepf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/psplitf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortdf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortipf.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortsof.o CC arch-linux-c-opt/obj/sys/utils/ftn-auto/sortif.o CC arch-linux-c-opt/obj/sys/totalview/tv_data_display.o CC arch-linux-c-opt/obj/sys/objects/gcomm.o CC arch-linux-c-opt/obj/sys/objects/gcookie.o CC arch-linux-c-opt/obj/sys/objects/fcallback.o CC arch-linux-c-opt/obj/sys/objects/destroy.o CC arch-linux-c-opt/obj/sys/objects/gtype.o CC arch-linux-c-opt/obj/sys/utils/sorti.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/dcontext.o CC arch-linux-c-opt/obj/sys/objects/olist.o CC arch-linux-c-opt/obj/sys/objects/garbage.o CC arch-linux-c-opt/obj/sys/objects/pgname.o CC arch-linux-c-opt/obj/sys/objects/package.o CC arch-linux-c-opt/obj/sys/objects/inherit.o CXX arch-linux-c-opt/obj/sys/objects/device/interface/mark_dcontext.o CC arch-linux-c-opt/obj/sys/utils/sortso.o CC arch-linux-c-opt/obj/sys/objects/aoptions.o CC arch-linux-c-opt/obj/sys/objects/prefix.o CC arch-linux-c-opt/obj/sys/objects/init.o CC arch-linux-c-opt/obj/sys/objects/pname.o CC arch-linux-c-opt/obj/sys/objects/ptype.o CC arch-linux-c-opt/obj/sys/objects/state.o CC arch-linux-c-opt/obj/sys/objects/version.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/destroyf.o CC arch-linux-c-opt/obj/sys/objects/device/util/memory.o CC arch-linux-c-opt/obj/sys/objects/device/util/devicereg.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcommf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/gcookief.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/inheritf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/optionsf.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/pinitf.o CC arch-linux-c-opt/obj/sys/objects/tagm.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/statef.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/subcommf.o CC arch-linux-c-opt/obj/sys/objects/subcomm.o CC arch-linux-c-opt/obj/sys/objects/ftn-auto/tagmf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgcommf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zdestroyf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zgtype.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zinheritf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsyamlf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpackage.o CC 
arch-linux-c-opt/obj/sys/objects/ftn-custom/zpgnamef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zpnamef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zprefixf.o CC arch-linux-c-opt/obj/sys/objects/pinit.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zptypef.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstartf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zversionf.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zstart.o CC arch-linux-c-opt/obj/sys/memory/mhbw.o CC arch-linux-c-opt/obj/sys/memory/mem.o CC arch-linux-c-opt/obj/sys/memory/ftn-auto/memf.o CC arch-linux-c-opt/obj/sys/memory/ftn-custom/zmtrf.o CC arch-linux-c-opt/obj/sys/memory/mal.o CC arch-linux-c-opt/obj/sys/memory/ftn-auto/mtrf.o CC arch-linux-c-opt/obj/sys/perfstubs/pstimer.o CC arch-linux-c-opt/obj/sys/error/errabort.o CC arch-linux-c-opt/obj/sys/error/checkptr.o CC arch-linux-c-opt/obj/sys/error/errstop.o CC arch-linux-c-opt/obj/sys/error/pstack.o CC arch-linux-c-opt/obj/sys/error/adebug.o CC arch-linux-c-opt/obj/sys/error/errtrace.o CC arch-linux-c-opt/obj/sys/error/fp.o CC arch-linux-c-opt/obj/sys/memory/mtr.o CC arch-linux-c-opt/obj/sys/error/signal.o CC arch-linux-c-opt/obj/sys/objects/ftn-custom/zoptionsf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/adebugf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/checkptrf.o CC arch-linux-c-opt/obj/sys/objects/options.o CC arch-linux-c-opt/obj/sys/error/ftn-custom/zerrf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/errf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/fpf.o CC arch-linux-c-opt/obj/sys/error/ftn-auto/signalf.o CC arch-linux-c-opt/obj/sys/error/err.o CC arch-linux-c-opt/obj/sys/fileio/fpath.o CC arch-linux-c-opt/obj/sys/fileio/fdir.o CC arch-linux-c-opt/obj/sys/fileio/fwd.o CC arch-linux-c-opt/obj/sys/fileio/ghome.o CC arch-linux-c-opt/obj/sys/fileio/ftest.o CC arch-linux-c-opt/obj/sys/fileio/grpath.o CC arch-linux-c-opt/obj/sys/fileio/rpath.o CC arch-linux-c-opt/obj/sys/fileio/mpiuopen.o CC arch-linux-c-opt/obj/sys/fileio/smatlab.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmpiuopenf.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zghomef.o CC arch-linux-c-opt/obj/sys/fileio/fretrieve.o CC arch-linux-c-opt/obj/sys/fileio/ftn-auto/sysiof.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zmprintf.o CC arch-linux-c-opt/obj/sys/info/ftn-auto/verboseinfof.o CC arch-linux-c-opt/obj/sys/fileio/ftn-custom/zsysiof.o CC arch-linux-c-opt/obj/sys/info/ftn-custom/zverboseinfof.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/axis.o CC arch-linux-c-opt/obj/sys/fileio/mprint.o CC arch-linux-c-opt/obj/sys/info/verboseinfo.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/bars.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/cmap.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/image.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/axisc.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/dscatter.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/lg.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/zoom.o CC arch-linux-c-opt/obj/sys/fileio/sysio.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zlgcf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/hists.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zzoomf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-custom/zaxisf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/axiscf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/barsf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/lgc.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/dscatterf.o CC 
arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/histsf.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dcoor.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dclear.o CC arch-linux-c-opt/obj/sys/classes/draw/utils/ftn-auto/lgcf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dellipse.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dflush.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpause.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dline.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmarker.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dmouse.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dpoint.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawregall.o CC arch-linux-c-opt/obj/sys/objects/optionsyaml.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drect.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/drawreg.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/draw.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtext.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdrawregf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtextf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dsave.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-custom/zdtrif.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dtri.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dclearf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dcoorf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/dviewp.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dellipsef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dflushf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmousef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dmarkerf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dlinef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpausef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dpointf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawregf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drawf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/drectf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dsavef.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtextf.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dtrif.o CC arch-linux-c-opt/obj/sys/classes/draw/interface/ftn-auto/dviewpf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/ftn-auto/drawnullf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/null/drawnull.o CC arch-linux-c-opt/obj/sys/classes/random/interface/dlregisrand.o CC arch-linux-c-opt/obj/sys/classes/random/interface/random.o CC arch-linux-c-opt/obj/sys/classes/random/interface/randreg.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomcf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/tikz/tikz.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-custom/zrandomf.o CC arch-linux-c-opt/obj/sys/classes/random/interface/ftn-auto/randomf.o CC arch-linux-c-opt/obj/sys/classes/random/interface/randomc.o CC arch-linux-c-opt/obj/sys/classes/random/impls/rand48/rand48.o CC arch-linux-c-opt/obj/sys/classes/random/impls/rand/rand.o CC arch-linux-c-opt/obj/sys/classes/bag/ftn-auto/bagf.o CC 
arch-linux-c-opt/obj/sys/classes/random/impls/rander48/rander48.o CC arch-linux-c-opt/obj/sys/classes/bag/ftn-custom/zbagf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dupl.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/flush.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/dlregispetsc.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewa.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewers.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewasetf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewregall.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/view.o CC arch-linux-c-opt/obj/sys/classes/bag/f90-custom/zbagf90.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-custom/zviewaf.o CC arch-linux-c-opt/obj/sys/classes/draw/impls/image/drawimage.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewf.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/ftn-auto/viewregf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/ftn-auto/glvisf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-auto/drawvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/ftn-custom/zdrawvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-custom/zbinvf.o CC arch-linux-c-opt/obj/sys/classes/bag/bag.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/ftn-auto/binvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/f90-custom/zbinvf90.o CC arch-linux-c-opt/obj/sys/classes/viewer/interface/viewreg.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/ftn-custom/zsendf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-auto/hdf5vf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/ftn-custom/zstringvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/string/stringv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/ftn-custom/zhdf5f.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/draw/drawv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/socket/send.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/ftn-custom/zvtkvf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/glvis/glvis.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vu/petscvu.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/vtk/vtkv.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zvcreatef.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/filevf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-auto/vcreateaf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/vcreatea.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/ftn-custom/zfilevf.o CC arch-linux-c-opt/obj/sys/time/cputime.o CC arch-linux-c-opt/obj/sys/time/fdate.o CC arch-linux-c-opt/obj/sys/time/ftn-auto/cputimef.o CC arch-linux-c-opt/obj/sys/time/ftn-custom/zptimef.o CC arch-linux-c-opt/obj/sys/f90-src/f90_cwrap.o CC arch-linux-c-opt/obj/vec/pf/interface/pfall.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/hdf5/hdf5v.o CC arch-linux-c-opt/obj/vec/pf/interface/ftn-custom/zpff.o CC arch-linux-c-opt/obj/vec/pf/interface/ftn-auto/pff.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/binary/binv.o CC arch-linux-c-opt/obj/vec/pf/impls/constant/const.o CC arch-linux-c-opt/obj/vec/pf/interface/pf.o CC arch-linux-c-opt/obj/sys/classes/viewer/impls/ascii/filev.o CC arch-linux-c-opt/obj/vec/pf/impls/string/cstring.o CC arch-linux-c-opt/obj/vec/is/utils/isio.o CC arch-linux-c-opt/obj/vec/is/utils/ftn-custom/zhdf5io.o CC 
          [... several thousand compile lines of the form "CC arch-linux-c-opt/obj/.../<file>.o" (and "FC arch-linux-c-opt/obj/.../f90-mod/...") elided; the vec, mat, dm, ksp, snes, ts, and tao source trees all compile without errors ...]
 CLINKER arch-linux-c-opt/lib/libpetsc.so.3.019.2
*** Building SLEPc ***
Checking environment... done
Checking PETSc installation... done
Generating Fortran stubs... done
Checking LAPACK library... done
Checking SCALAPACK... done
Writing various configuration files... done

================================================================================
SLEPc Configuration
================================================================================

SLEPc directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc
  It is a git repository on branch: remotes/origin/jose/test-petsc-branch~2
SLEPc prefix directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt
PETSc directory: /home/vrkaka/SLlibs/petsc
  It is a git repository on branch: main
Architecture "arch-linux-c-opt" with double precision real numbers
SCALAPACK from SCALAPACK linked by PETSc

xxx==========================================================================xxx
 Configure stage complete.
Now build the SLEPc library with: make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt xxx==========================================================================xxx ========================================== Starting make run on WKS-101259-LT at Wed, 07 Jun 2023 13:20:55 +0300 Machine characteristics: Linux WKS-101259-LT 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ----------------------------------------- Using SLEPc directory: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc Using PETSc directory: /home/vrkaka/SLlibs/petsc Using PETSc arch: arch-linux-c-opt ----------------------------------------- SLEPC_VERSION_RELEASE 0 SLEPC_VERSION_MAJOR 3 SLEPC_VERSION_MINOR 19 SLEPC_VERSION_SUBMINOR 0 SLEPC_VERSION_DATE "unknown" SLEPC_VERSION_GIT "unknown" SLEPC_VERSION_DATE_GIT "unknown" ----------------------------------------- Using SLEPc configure options: --prefix=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt Using SLEPc configuration flags: #define SLEPC_PETSC_DIR "/home/vrkaka/SLlibs/petsc" #define SLEPC_PETSC_ARCH "arch-linux-c-opt" #define SLEPC_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc" #define SLEPC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" #define SLEPC_VERSION_GIT "v3.19.0-34-ga2e6dffce" #define SLEPC_VERSION_DATE_GIT "2023-05-09 07:30:59 +0000" #define SLEPC_VERSION_BRANCH_GIT "remotes/origin/jose/test-petsc-branch~2" #define SLEPC_HAVE_SCALAPACK 1 #define SLEPC_SCALAPACK_HAVE_UNDERSCORE 1 #define SLEPC_HAVE_PACKAGES ":scalapack:" ----------------------------------------- PETSC_VERSION_RELEASE 0 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 19 PETSC_VERSION_SUBMINOR 2 PETSC_VERSION_DATE "unknown" PETSC_VERSION_GIT "unknown" PETSC_VERSION_DATE_GIT "unknown" ----------------------------------------- Using PETSc configure options: --with-openmp --download-mpich --download-mumps --download-scalapack --download-openblas --download-slepc --download-metis --download-med --download-hdf5 --download-zlib --download-netcdf --download-pnetcdf --download-exodusii --with-scalar-type=real --with-debugging=0 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 Using PETSc configuration flags: #define PETSC_ARCH "arch-linux-c-opt" #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) #define PETSC_BLASLAPACK_UNDERSCORE 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_FUNCTION(why) __attribute__((deprecated(why))) #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) #define PETSC_DEPRECATED_TYPEDEF(why) __attribute__((deprecated(why))) #define PETSC_DIR "/home/vrkaka/SLlibs/petsc" #define PETSC_DIR_SEPARATOR '/' #define PETSC_FORTRAN_CHARLEN_T size_t #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_ATTRIBUTEALIGNED 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_BZERO 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_ATOMIC 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 1 #define PETSC_HAVE_CXX_DIALECT_CXX17 1 #define PETSC_HAVE_CXX_DIALECT_CXX20 1 #define 
PETSC_HAVE_DLADDR 1 #define PETSC_HAVE_DLCLOSE 1 #define PETSC_HAVE_DLERROR 1 #define PETSC_HAVE_DLFCN_H 1 #define PETSC_HAVE_DLOPEN 1 #define PETSC_HAVE_DLSYM 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_DRAND48 1 #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_EXECUTABLE_EXPORT 1 #define PETSC_HAVE_EXODUSII 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FE_VALUES 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORK 1 #define PETSC_HAVE_FORTRAN 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETDOMAINNAME 1 #define PETSC_HAVE_GETHOSTBYNAME 1 #define PETSC_HAVE_GETHOSTNAME 1 #define PETSC_HAVE_GETPAGESIZE 1 #define PETSC_HAVE_GETRUSAGE 1 #define PETSC_HAVE_HDF5 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MED 1 #define PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_METIS 1 #define PETSC_HAVE_MKSTEMP 1 #define PETSC_HAVE_MMAP 1 #define PETSC_HAVE_MPICH 1 #define PETSC_HAVE_MPICH_NUMVERSION 40101300 #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 #define PETSC_HAVE_MPIIO 1 #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 #define PETSC_HAVE_MPI_COMBINER_DUP 1 #define PETSC_HAVE_MPI_COMBINER_NAMED 1 #define PETSC_HAVE_MPI_F90MODULE 1 #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 #define PETSC_HAVE_MPI_INIT_THREAD 1 #define PETSC_HAVE_MPI_INT64_T 1 #define PETSC_HAVE_MPI_LARGE_COUNT 1 #define PETSC_HAVE_MPI_LONG_DOUBLE 1 #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 #define PETSC_HAVE_MPI_ONE_SIDED 1 #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 #define PETSC_HAVE_MPI_RGET 1 #define PETSC_HAVE_MPI_WIN_CREATE 1 #define PETSC_HAVE_MUMPS 1 #define PETSC_HAVE_NANOSLEEP 1 #define PETSC_HAVE_NETCDF 1 #define PETSC_HAVE_NETDB_H 1 #define PETSC_HAVE_NETINET_IN_H 1 #define PETSC_HAVE_OPENBLAS 1 #define PETSC_HAVE_OPENMP 1 #define PETSC_HAVE_PACKAGES ":blaslapack:exodusii:hdf5:mathlib:med:metis:mpi:mpich:mumps:netcdf:openblas:openmp:pnetcdf:pthread:regex:scalapack:sowing:zlib:" #define PETSC_HAVE_PNETCDF 1 #define PETSC_HAVE_POPEN 1 #define PETSC_HAVE_POSIX_MEMALIGN 1 #define PETSC_HAVE_PTHREAD 1 #define PETSC_HAVE_PWD_H 1 #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_READLINK 1 #define PETSC_HAVE_REALPATH 1 #define PETSC_HAVE_REAL___FLOAT128 1 #define PETSC_HAVE_REGEX 1 #define PETSC_HAVE_RTLD_GLOBAL 1 #define PETSC_HAVE_RTLD_LAZY 1 #define PETSC_HAVE_RTLD_LOCAL 1 #define PETSC_HAVE_RTLD_NOW 1 #define PETSC_HAVE_SCALAPACK 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_SLEEP 1 #define PETSC_HAVE_SLEPC 1 #define PETSC_HAVE_SNPRINTF 1 #define PETSC_HAVE_SOCKET 1 #define PETSC_HAVE_SOWING 1 #define PETSC_HAVE_SO_REUSEADDR 1 #define PETSC_HAVE_STDATOMIC_H 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_STRCASECMP 1 #define PETSC_HAVE_STRINGS_H 1 #define PETSC_HAVE_STRUCT_SIGACTION 1 #define 
PETSC_HAVE_SYS_PARAM_H 1 #define PETSC_HAVE_SYS_PROCFS_H 1 #define PETSC_HAVE_SYS_RESOURCE_H 1 #define PETSC_HAVE_SYS_SOCKET_H 1 #define PETSC_HAVE_SYS_TIMES_H 1 #define PETSC_HAVE_SYS_TIME_H 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_SYS_UTSNAME_H 1 #define PETSC_HAVE_SYS_WAIT_H 1 #define PETSC_HAVE_TAU_PERFSTUBS 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_UNAME 1 #define PETSC_HAVE_UNISTD_H 1 #define PETSC_HAVE_USLEEP 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_HDF5_HAVE_PARALLEL 1 #define PETSC_HDF5_HAVE_ZLIB 1 #define PETSC_INTPTR_T intptr_t #define PETSC_INTPTR_T_FMT "#" PRIxPTR #define PETSC_IS_COLORING_MAX USHRT_MAX #define PETSC_IS_COLORING_VALUE_TYPE short #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 #define PETSC_LEVEL1_DCACHE_LINESIZE 64 #define PETSC_LIB_DIR "/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib" #define PETSC_MAX_PATH_LEN 4096 #define PETSC_MEMALIGN 16 #define PETSC_MPICC_SHOW "gcc -fPIC -Wno-lto-type-mismatch -Wno-stringop-overflow -O3 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath -Wl,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,--enable-new-dtags -lmpi" #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT #define PETSC_OMAKE "/usr/bin/gmake --no-print-directory" #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PYTHON_EXE "/usr/bin/python3" #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_REPLACE_DIR_SEPARATOR '\\' #define PETSC_SIGNAL_CAST #define PETSC_SIZEOF_INT 4 #define PETSC_SIZEOF_LONG 8 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_SLSUFFIX "so" #define PETSC_UINTPTR_T uintptr_t #define PETSC_UINTPTR_T_FMT "#" PRIxPTR #define PETSC_UNUSED __attribute((unused)) #define PETSC_USE_AVX512_KERNELS 1 #define PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_CTABLE 1 #define PETSC_USE_DMLANDAU_2D 1 #define PETSC_USE_INFO 1 #define PETSC_USE_ISATTY 1 #define PETSC_USE_LOG 1 #define PETSC_USE_MALLOC_COALESCED 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_USE_SHARED_LIBRARIES 1 #define PETSC_USE_SINGLE_LIBRARY 1 #define PETSC_USE_SOCKET_VIEWER 1 #define PETSC_USE_VISIBILITY_C 1 #define PETSC_USE_VISIBILITY_CXX 1 #define PETSC_USING_64BIT_PTR 1 #define PETSC_USING_F2003 1 #define PETSC_USING_F90FREEFORM 1 #define PETSC_VERSION_BRANCH_GIT "main" #define PETSC_VERSION_DATE_GIT "2023-06-07 04:13:28 +0000" #define PETSC_VERSION_GIT "v3.19.2-384-g9b9c8f2e245" #define PETSC__BSD_SOURCE 1 #define PETSC__DEFAULT_SOURCE 1 #define PETSC__GNU_SOURCE 1 ----------------------------------------- Using C/C++ include paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Using C compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc -o .o -c -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Using C++ compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing 
-Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -O3 -std=gnu++20 -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp Using Fortran include/module paths: -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include Using Fortran compile: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 -o .o -c -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -fopenmp -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/include -I/home/vrkaka/SLlibs/petsc/include -I/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -fopenmp ----------------------------------------- Using C/C++ linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpicc Using C/C++ flags: -fopenmp -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -O3 Using Fortran linker: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpif90 Using Fortran flags: -fopenmp -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 ----------------------------------------- Using libraries: -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc/arch-linux-c-opt/lib -lslepc -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -L/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lopenblas -lmetis -lexoIIv2for32 -lexodus -lmedC -lmed -lnetcdf -lpnetcdf -lhdf5_hl -lhdf5 -lm -lz -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ ------------------------------------------ Using mpiexec: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/bin/mpiexec ------------------------------------------ Using MAKE: /usr/bin/gmake Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc ========================================== /usr/bin/gmake --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= slepc_libs /usr/bin/python3 /home/vrkaka/SLlibs/petsc/config/gmakegen.py --petsc-arch=arch-linux-c-opt --pkg-dir=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc --pkg-name=slepc --pkg-pkgs=sys,eps,svd,pep,nep,mfn,lme --pkg-arch=arch-linux-c-opt CC arch-linux-c-opt/obj/sys/ftn-auto/slepcscf.o CC arch-linux-c-opt/obj/sys/ftn-auto/slepcinitf.o CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_startf.o CC arch-linux-c-opt/obj/sys/ftn-custom/zslepc_start.o CC arch-linux-c-opt/obj/sys/dlregisslepc.o CC arch-linux-c-opt/obj/sys/slepcutil.o CC 
arch-linux-c-opt/obj/sys/slepcinit.o CC arch-linux-c-opt/obj/sys/slepcsc.o CC arch-linux-c-opt/obj/sys/slepccontour.o Use "/usr/bin/gmake V=1" to see verbose compile lines, "/usr/bin/gmake V=0" to suppress. FC arch-linux-c-opt/obj/sys/f90-mod/slepcsysmod.o CC arch-linux-c-opt/obj/sys/vec/ftn-auto/vecutilf.o CC arch-linux-c-opt/obj/sys/ftn-custom/zslepcutil.o CC arch-linux-c-opt/obj/sys/vec/pool.o CC arch-linux-c-opt/obj/sys/mat/ftn-auto/matutilf.o CC arch-linux-c-opt/obj/sys/vec/vecutil.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-custom/zpolygon.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/ftn-auto/rgpolygonf.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ring/ftn-auto/rgringf.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-custom/zellipse.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/ftn-auto/rgellipsef.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ellipse/rgellipse.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-custom/zinterval.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/ftn-auto/rgintervalf.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/ring/rgring.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgregis.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/polygon/rgpolygon.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-auto/rgbasicf.o CC arch-linux-c-opt/obj/sys/mat/matutil.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/ftn-custom/zrgf.o CC arch-linux-c-opt/obj/sys/classes/rg/interface/rgbasic.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/phi/ftn-auto/fnphif.o CC arch-linux-c-opt/obj/sys/classes/rg/impls/interval/rginterval.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/combine/ftn-auto/fncombinef.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/phi/fnphi.o CC arch-linux-c-opt/obj/sys/vec/veccomp.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/rational/ftn-custom/zrational.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/sqrt/fnsqrt.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/fnutil.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/combine/fncombine.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/log/fnlog.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/fnregis.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-auto/fnbasicf.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/ftn-custom/zfnf.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/invsqrt/fninvsqrt.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/rational/fnrational.o CC arch-linux-c-opt/obj/sys/classes/st/impls/cayley/ftn-auto/cayleyf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/precond/ftn-auto/precondf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/cayley/cayley.o CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/ftn-auto/filterf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/precond/precond.o CC arch-linux-c-opt/obj/sys/classes/st/impls/sinvert/sinvert.o CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/filter.o CC arch-linux-c-opt/obj/sys/classes/fn/interface/fnbasic.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shift/shift.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/shell.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-auto/shellf.o CC arch-linux-c-opt/obj/sys/classes/st/impls/shell/ftn-custom/zshell.o CC arch-linux-c-opt/obj/sys/classes/fn/impls/exp/fnexp.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stregis.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsetf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stset.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stfuncf.o CC 
arch-linux-c-opt/obj/sys/classes/st/interface/ftn-custom/zstf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stshellmat.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stslesf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stfunc.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stsles.o CC arch-linux-c-opt/obj/sys/classes/st/interface/ftn-auto/stsolvef.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/ftn-auto/bvtensorf.o CC arch-linux-c-opt/obj/sys/classes/st/interface/stsolve.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/contiguous/contig.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbiorthog.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/mat/bvmat.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvblas.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/svec/svec.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/vecs/vecs.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvkrylov.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvfunc.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvregis.o CC arch-linux-c-opt/obj/sys/classes/bv/impls/tensor/bvtensor.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvbasic.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvcontour.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-custom/zbvf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbiorthogf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvbasicf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvcontourf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvfuncf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvglobalf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvkrylovf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvopsf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/ftn-auto/bvorthogf.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvops.o CC arch-linux-c-opt/obj/sys/classes/st/impls/filter/filtlan.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvglobal.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvlapack.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/ftn-auto/dshsvdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/svd/ftn-auto/dssvdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/dsutil.o CC arch-linux-c-opt/obj/sys/classes/bv/interface/bvorthog.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-auto/dspepf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/ftn-custom/zdspepf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nep/ftn-auto/dsnepf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghep/dsghep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhepts/dsnhepts.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/svd/dssvd.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/gnhep/dsgnhep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/pep/dspep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nhep/dsnhep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hsvd/dshsvd.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/nep/dsnep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/hz.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dmerg2.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dlaed3m.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/ftn-auto/dsgsvdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsbtdc.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dsrtdf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/bdc/dibtdc.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsbasicf.o CC 
arch-linux-c-opt/obj/sys/classes/ds/interface/dsbasic.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-custom/zdsf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/invit.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsopsf.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/dsops.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/ftn-auto/dsprivf.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/hep/dshep.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/ghiep/dsghiep.o CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/ftn-auto/lobpcgf.o CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/ftn-auto/rqcgf.o CC arch-linux-c-opt/obj/eps/impls/lyapii/ftn-auto/lyapiif.o CC arch-linux-c-opt/obj/sys/classes/ds/interface/dspriv.o CC arch-linux-c-opt/obj/sys/classes/ds/impls/gsvd/dsgsvd.o CC arch-linux-c-opt/obj/eps/impls/subspace/subspace.o CC arch-linux-c-opt/obj/eps/impls/external/scalapack/scalapack.o CC arch-linux-c-opt/obj/eps/impls/lapack/lapack.o CC arch-linux-c-opt/obj/eps/impls/ciss/ftn-auto/cissf.o CC arch-linux-c-opt/obj/eps/impls/cg/rqcg/rqcg.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdschm.o CC arch-linux-c-opt/obj/eps/impls/cg/lobpcg/lobpcg.o CC arch-linux-c-opt/obj/eps/impls/davidson/davidson.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdtestconv.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdinitv.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdgd2.o CC arch-linux-c-opt/obj/eps/impls/lyapii/lyapii.o CC arch-linux-c-opt/obj/eps/impls/davidson/jd/ftn-auto/jdf.o CC arch-linux-c-opt/obj/eps/impls/davidson/gd/ftn-auto/gdf.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdcalcpairs.o CC arch-linux-c-opt/obj/eps/impls/davidson/gd/gd.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdutils.o CC arch-linux-c-opt/obj/eps/impls/davidson/jd/jd.o CC arch-linux-c-opt/obj/eps/impls/krylov/lanczos/ftn-auto/lanczosf.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdupdatev.o CC arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/ftn-auto/arnoldif.o CC arch-linux-c-opt/obj/eps/impls/krylov/arnoldi/arnoldi.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-indef.o CC arch-linux-c-opt/obj/eps/impls/krylov/epskrylov.o CC arch-linux-c-opt/obj/eps/impls/davidson/dvdimprovex.o CC arch-linux-c-opt/obj/eps/impls/ciss/ciss.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-custom/zkrylovschurf.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ftn-auto/krylovschurf.o CC arch-linux-c-opt/obj/eps/impls/power/ftn-auto/powerf.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-twosided.o CC arch-linux-c-opt/obj/eps/interface/dlregiseps.o CC arch-linux-c-opt/obj/eps/interface/epsbasic.o CC arch-linux-c-opt/obj/eps/interface/epsregis.o CC arch-linux-c-opt/obj/eps/impls/krylov/lanczos/lanczos.o CC arch-linux-c-opt/obj/eps/interface/epsdefault.o CC arch-linux-c-opt/obj/eps/interface/epsmon.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/krylovschur.o CC arch-linux-c-opt/obj/eps/interface/epsopts.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsbasicf.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsdefaultf.o CC arch-linux-c-opt/obj/eps/interface/epssetup.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsmonf.o CC arch-linux-c-opt/obj/eps/impls/power/power.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssetupf.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsviewf.o CC arch-linux-c-opt/obj/eps/interface/epssolve.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epsoptsf.o CC arch-linux-c-opt/obj/eps/interface/ftn-auto/epssolvef.o CC 
arch-linux-c-opt/obj/eps/interface/ftn-custom/zepsf.o CC arch-linux-c-opt/obj/svd/impls/lanczos/ftn-auto/gklanczosf.o CC arch-linux-c-opt/obj/svd/impls/cross/ftn-auto/crossf.o CC arch-linux-c-opt/obj/eps/interface/epsview.o CC arch-linux-c-opt/obj/svd/impls/external/scalapack/svdscalap.o CC arch-linux-c-opt/obj/svd/impls/randomized/rsvd.o CC arch-linux-c-opt/obj/svd/impls/trlanczos/ftn-auto/trlanczosf.o CC arch-linux-c-opt/obj/svd/impls/cyclic/ftn-auto/cyclicf.o CC arch-linux-c-opt/obj/svd/interface/dlregissvd.o CC arch-linux-c-opt/obj/svd/interface/svdbasic.o CC arch-linux-c-opt/obj/svd/impls/lapack/svdlapack.o CC arch-linux-c-opt/obj/svd/impls/lanczos/gklanczos.o CC arch-linux-c-opt/obj/eps/impls/krylov/krylovschur/ks-slice.o CC arch-linux-c-opt/obj/svd/interface/svddefault.o CC arch-linux-c-opt/obj/svd/impls/cross/cross.o CC arch-linux-c-opt/obj/svd/interface/svdregis.o CC arch-linux-c-opt/obj/svd/interface/svdmon.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdbasicf.o CC arch-linux-c-opt/obj/svd/interface/svdopts.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svddefaultf.o CC arch-linux-c-opt/obj/svd/interface/svdsetup.o CC arch-linux-c-opt/obj/svd/interface/svdsolve.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdmonf.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdoptsf.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsetupf.o CC arch-linux-c-opt/obj/svd/interface/ftn-custom/zsvdf.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdsolvef.o CC arch-linux-c-opt/obj/svd/interface/svdview.o CC arch-linux-c-opt/obj/svd/interface/ftn-auto/svdviewf.o CC arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/ftn-auto/qarnoldif.o CC arch-linux-c-opt/obj/pep/impls/peputils.o CC arch-linux-c-opt/obj/svd/impls/cyclic/cyclic.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/qslicef.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-custom/zstoarf.o CC arch-linux-c-opt/obj/pep/impls/krylov/pepkrylov.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/ftn-auto/stoarf.o CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ftn-auto/ptoarf.o CC arch-linux-c-opt/obj/pep/impls/krylov/qarnoldi/qarnoldi.o CC arch-linux-c-opt/obj/pep/impls/linear/ftn-auto/linearf.o CC arch-linux-c-opt/obj/pep/impls/linear/qeplin.o CC arch-linux-c-opt/obj/pep/impls/jd/ftn-auto/pjdf.o CC arch-linux-c-opt/obj/pep/interface/dlregispep.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/stoar.o CC arch-linux-c-opt/obj/pep/interface/pepbasic.o CC arch-linux-c-opt/obj/pep/interface/pepmon.o CC arch-linux-c-opt/obj/pep/impls/linear/linear.o CC arch-linux-c-opt/obj/pep/interface/pepdefault.o CC arch-linux-c-opt/obj/svd/impls/trlanczos/trlanczos.o CC arch-linux-c-opt/obj/pep/interface/pepregis.o CC arch-linux-c-opt/obj/pep/impls/krylov/toar/ptoar.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepbasicf.o CC arch-linux-c-opt/obj/pep/interface/pepopts.o CC arch-linux-c-opt/obj/pep/interface/pepsetup.o CC arch-linux-c-opt/obj/pep/interface/pepsolve.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepdefaultf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepmonf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepoptsf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsetupf.o CC arch-linux-c-opt/obj/pep/interface/ftn-custom/zpepf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepviewf.o CC arch-linux-c-opt/obj/pep/interface/ftn-auto/pepsolvef.o CC arch-linux-c-opt/obj/pep/interface/peprefine.o CC arch-linux-c-opt/obj/pep/interface/pepview.o CC arch-linux-c-opt/obj/pep/impls/krylov/stoar/qslice.o CC 
arch-linux-c-opt/obj/nep/impls/slp/ftn-auto/slpf.o CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-custom/znleigsf.o CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigs-fullbf.o CC arch-linux-c-opt/obj/nep/impls/nleigs/ftn-auto/nleigsf.o CC arch-linux-c-opt/obj/nep/impls/interpol/ftn-auto/interpolf.o CC arch-linux-c-opt/obj/nep/impls/slp/slp.o CC arch-linux-c-opt/obj/nep/impls/narnoldi/ftn-auto/narnoldif.o CC arch-linux-c-opt/obj/nep/impls/slp/slp-twosided.o CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs-fullb.o CC arch-linux-c-opt/obj/nep/impls/interpol/interpol.o CC arch-linux-c-opt/obj/nep/impls/rii/ftn-auto/riif.o CC arch-linux-c-opt/obj/nep/interface/dlregisnep.o CC arch-linux-c-opt/obj/nep/impls/narnoldi/narnoldi.o CC arch-linux-c-opt/obj/pep/impls/krylov/toar/nrefine.o CC arch-linux-c-opt/obj/nep/interface/nepdefault.o CC arch-linux-c-opt/obj/nep/interface/nepregis.o CC arch-linux-c-opt/obj/nep/impls/rii/rii.o CC arch-linux-c-opt/obj/nep/interface/nepbasic.o CC arch-linux-c-opt/obj/nep/interface/nepmon.o CC arch-linux-c-opt/obj/pep/impls/jd/pjd.o CC arch-linux-c-opt/obj/nep/interface/nepresolv.o CC arch-linux-c-opt/obj/nep/interface/nepopts.o CC arch-linux-c-opt/obj/nep/impls/nepdefl.o CC arch-linux-c-opt/obj/nep/interface/nepsetup.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepdefaultf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepbasicf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepmonf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepoptsf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepresolvf.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsetupf.o CC arch-linux-c-opt/obj/nep/interface/nepsolve.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepsolvef.o CC arch-linux-c-opt/obj/nep/interface/ftn-auto/nepviewf.o CC arch-linux-c-opt/obj/nep/interface/ftn-custom/znepf.o CC arch-linux-c-opt/obj/mfn/interface/dlregismfn.o CC arch-linux-c-opt/obj/mfn/impls/krylov/mfnkrylov.o CC arch-linux-c-opt/obj/nep/interface/nepview.o CC arch-linux-c-opt/obj/nep/interface/neprefine.o CC arch-linux-c-opt/obj/mfn/interface/mfnmon.o CC arch-linux-c-opt/obj/mfn/interface/mfnregis.o CC arch-linux-c-opt/obj/mfn/impls/expokit/mfnexpokit.o CC arch-linux-c-opt/obj/mfn/interface/mfnopts.o CC arch-linux-c-opt/obj/mfn/interface/mfnbasic.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnbasicf.o CC arch-linux-c-opt/obj/mfn/interface/mfnsolve.o CC arch-linux-c-opt/obj/mfn/interface/mfnsetup.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnmonf.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnoptsf.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsetupf.o CC arch-linux-c-opt/obj/mfn/interface/ftn-auto/mfnsolvef.o CC arch-linux-c-opt/obj/mfn/interface/ftn-custom/zmfnf.o CC arch-linux-c-opt/obj/lme/interface/dlregislme.o CC arch-linux-c-opt/obj/nep/impls/nleigs/nleigs.o CC arch-linux-c-opt/obj/lme/interface/lmeregis.o CC arch-linux-c-opt/obj/lme/interface/lmemon.o CC arch-linux-c-opt/obj/lme/impls/krylov/lmekrylov.o CC arch-linux-c-opt/obj/lme/interface/lmebasic.o CC arch-linux-c-opt/obj/lme/interface/lmeopts.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmemonf.o CC arch-linux-c-opt/obj/lme/interface/lmesetup.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmebasicf.o CC arch-linux-c-opt/obj/lme/interface/lmesolve.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmeoptsf.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesolvef.o CC arch-linux-c-opt/obj/lme/interface/lmedense.o CC arch-linux-c-opt/obj/lme/interface/ftn-auto/lmesetupf.o CC 
arch-linux-c-opt/obj/lme/interface/ftn-custom/zlmef.o FC arch-linux-c-opt/obj/sys/classes/rg/f90-mod/slepcrgmod.o FC arch-linux-c-opt/obj/sys/classes/bv/f90-mod/slepcbvmod.o FC arch-linux-c-opt/obj/sys/classes/fn/f90-mod/slepcfnmod.o FC arch-linux-c-opt/obj/lme/f90-mod/slepclmemod.o FC arch-linux-c-opt/obj/sys/classes/ds/f90-mod/slepcdsmod.o FC arch-linux-c-opt/obj/sys/classes/st/f90-mod/slepcstmod.o FC arch-linux-c-opt/obj/mfn/f90-mod/slepcmfnmod.o FC arch-linux-c-opt/obj/eps/f90-mod/slepcepsmod.o FC arch-linux-c-opt/obj/svd/f90-mod/slepcsvdmod.o FC arch-linux-c-opt/obj/pep/f90-mod/slepcpepmod.o FC arch-linux-c-opt/obj/nep/f90-mod/slepcnepmod.o CLINKER arch-linux-c-opt/lib/libslepc.so.3.019.0 Now to install the library do: make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc PETSC_DIR=/home/vrkaka/SLlibs/petsc install ========================================= *** Installing SLEPc *** *** Installing SLEPc at prefix location: /home/vrkaka/SLlibs/petsc/arch-linux-c-opt *** ==================================== Install complete. Now to check if the libraries are working do (in current directory): make SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check ==================================== /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc install-builtafterslepc /usr/bin/gmake --no-print-directory -f makefile PETSC_ARCH=arch-linux-c-opt PETSC_DIR=/home/vrkaka/SLlibs/petsc SLEPC_DIR=/home/vrkaka/SLlibs/petsc/arch-linux-c-opt/externalpackages/git.slepc slepc4py-install gmake[6]: Nothing to be done for 'slepc4py-install'. ========================================= Now to check if the libraries are working do: make PETSC_DIR=/home/vrkaka/SLlibs/petsc PETSC_ARCH=arch-linux-c-opt check ========================================= and here is the cmake message when configuring the project: vrkaka at WKS-101259-LT:~/sparselizardipopt/build$ cmake .. 
-- The CXX compiler identification is GNU 11.3.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- MPI headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -- MPI library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmpich.so -- GMSH HEADERS NOT FOUND (OPTIONAL) -- GMSH LIBRARY NOT FOUND (OPTIONAL) -- Ipopt headers found at /home/vrkaka/Ipopt/installation/include/coin-or -- Ipopt library found at /home/vrkaka/Ipopt/installation/lib/libipopt.so -- Blas header cblas.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -- Blas library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libopenblas.so -- Metis headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -- Metis library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libmetis.so -- Mumps headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -- Mumps library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libcmumps.a -- Petsc header petsc.h found at /home/vrkaka/SLlibs/petsc/include -- Petsc header petscconf.h found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -- Petsc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libpetsc.so -- Slepc headers found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/include -- Slepc library found at /home/vrkaka/SLlibs/petsc/arch-linux-c-opt/lib/libslepc.so -- Configuring done -- Generating done -- Build files have been written to: /home/vrkaka/sparselizardipopt/build After that building the project with cmake goes fine and a simple mpi test works -Kalle -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jun 9 07:59:09 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 9 Jun 2023 08:59:09 -0400 Subject: [petsc-users] Behavior of KSP iterations when using Restart In-Reply-To: <2ef44089a03a470da5269097e9b6cf7f@tum.de> References: <2ef44089a03a470da5269097e9b6cf7f@tum.de> Message-ID: On Fri, Jun 9, 2023 at 3:09?AM Nicolas Garcia Guzman wrote: > Hi, sorry I will attach the logs of a run using 17 restart iterations and > one using 19. Testing using other values like 100, the true residual norm > does diverge. Is there any recommendation for this case? The linear system > comes from a DGFEM discretization in FEniCS of a hyperbolic system of PDEs. > The system is non-SPD, but the equations have been non-dimensionalized. > 1) The problem is a breakdown in GMRES, indicated by DIVERGED_BREAKDOWN. We can get specifics of the breakdown by adding the flag -info ksp 2) Neither solver is actually solving this system. We recommend first using a (possibly slow) solver that solves the system, and then step-by-step backing off it until you are happy with the performance. 
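As a concrete illustration of that suggestion (an untested sketch only -- it reuses the main.py driver and the option names already in this thread, and the restart/tolerance values are placeholders, not recommendations), the first "make it actually converge" run could look like:

python main.py -ksp_type gmres -ksp_gmres_restart 100 -ksp_rtol 1e-10 \
    -pc_type asm -sub_pc_type lu \
    -ksp_monitor_true_residual -ksp_converged_reason

Only once the true residual really drops should the subdomain solves be relaxed again (for example, back to ILU with more fill levels), checking at each step that convergence is retained.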
Here I think we should first try accurate solves of the subdomain: -sub_pc_type lu Thanks, Matt > ------------------------------ > *De:* Matthew Knepley > *Enviado:* viernes, 9 de junio de 2023 4:13:35 > *Para:* Nicolas Garcia Guzman > *Cc:* petsc-users at mcs.anl.gov > *Asunto:* Re: [petsc-users] Behavior of KSP iterations when using Restart > > On Thu, Jun 8, 2023 at 9:13?PM Nicolas Garcia Guzman < > nicolas.garcia at tum.de> wrote: > >> Hello, >> >> >> I am solving a linear system using petsc4py, with the following command: >> >> >> python main.py -ksp_type gmres -ksp_gmres_restart 16 -ksp_max_it 180000 >> -ksp_monitor -ksp_converged_reason -ksp_rtol 1e-15 -pc_type asm >> -sub_pc_type ilu -sub_pc_factor_levels 1 -sub_ksp_type preonly >> >> >> In the script all I do is import the libraries, load the linear system, >> set options and solve. >> >> >> However, simply changing the restart parameter to something like 20, will >> make the solution iterate to exactly 2 times the restart parameter, and >> then say the iteration diverged. This happens every time no matter the >> parameter chosen, unless it's 16 or less. Is this expected behavior or is >> the problem coming from my linear system? >> > > 1) Always send the complete output > > 2) I am guessing that the iteration stops because the restarted residual > is too large.This could be due to your linear system. What kind of system > are you solving? > > Thanks, > > Matt > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.garcia at tum.de Fri Jun 9 02:09:40 2023 From: nicolas.garcia at tum.de (Nicolas Garcia Guzman) Date: Fri, 9 Jun 2023 07:09:40 +0000 Subject: [petsc-users] Behavior of KSP iterations when using Restart In-Reply-To: References: , Message-ID: <2ef44089a03a470da5269097e9b6cf7f@tum.de> Hi, sorry I will attach the logs of a run using 17 restart iterations and one using 19. Testing using other values like 100, the true residual norm does diverge. Is there any recommendation for this case? The linear system comes from a DGFEM discretization in FEniCS of a hyperbolic system of PDEs. The system is non-SPD, but the equations have been non-dimensionalized. ________________________________ De: Matthew Knepley Enviado: viernes, 9 de junio de 2023 4:13:35 Para: Nicolas Garcia Guzman Cc: petsc-users at mcs.anl.gov Asunto: Re: [petsc-users] Behavior of KSP iterations when using Restart On Thu, Jun 8, 2023 at 9:13?PM Nicolas Garcia Guzman > wrote: Hello, I am solving a linear system using petsc4py, with the following command: python main.py -ksp_type gmres -ksp_gmres_restart 16 -ksp_max_it 180000 -ksp_monitor -ksp_converged_reason -ksp_rtol 1e-15 -pc_type asm -sub_pc_type ilu -sub_pc_factor_levels 1 -sub_ksp_type preonly In the script all I do is import the libraries, load the linear system, set options and solve. However, simply changing the restart parameter to something like 20, will make the solution iterate to exactly 2 times the restart parameter, and then say the iteration diverged. 
This happens every time no matter the parameter chosen, unless it's 16 or less. Is this expected behavior or is the problem coming from my linear system? 1) Always send the complete output 2) I am guessing that the iteration stops because the restarted residual is too large. This could be due to your linear system. What kind of system are you solving? Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed... URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed... Name: restart17.txt URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed... Name: restart19.txt URL:

From balay at mcs.anl.gov Fri Jun 9 08:19:05 2023
From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 9 Jun 2023 08:19:05 -0500 (CDT) Subject: [petsc-users] PETSc downloading older version of OpenBLAS In-Reply-To: References: Message-ID: <4c1cd3ee-feaf-8999-5260-abf9e62d2de3@mcs.anl.gov>

Yeah, the --download-openblas-commit option is a modifier to --download-openblas - so both options should be specified. There is a proposal to automatically enable --download-openblas when --download-openblas-commit (or other similar modifier) option is specified - but it's not in PETSc yet. Satish

On Fri, 9 Jun 2023, Kalle Karhapää (TAU) wrote: > I managed to download a compatible OpenBLAS with this configure command: > > ./configure --download-openblas --download-openblas-commit='0b678b19dc03f2a999d6e038814c4c50b9640a4e' --with-openmp --with-mpi=0 --with-shared-libraries=1 --with-mumps-serial=1 --download-mumps --download-metis --download-slepc --with-debugging=0 --with-scalar-type=real --with-x=0 COPTFLAGS='-O3' CXXOPTFLAGS='-O3' FOPTFLAGS='-O3'; > > --download-openblas-commit= didn't work by itself, but together with --download-openblas it did > > case closed! > > -Kalle > From: Kalle Karhapää (TAU) > Sent: perjantai 9. kesäkuuta 2023 8.47 > To: petsc-users at mcs.anl.gov > Subject: PETSc downloading older version of OpenBLAS > > Hi all > > > During install I'm checking out an older version of PETSc (v3.17.1-512-g27c9ef7be8) but running into problems with --download-openblas in configure. > > I suspect the newest version of OpenBLAS that is being downloaded from git is incompatible with this older version of PETSc > > > Is there a way to have PETSc --download an older version of OpenBLAS (e.g. v0.3.20)? > > > Thanks > > -Kalle >

From mfadams at lbl.gov Fri Jun 9 10:02:01 2023
From: mfadams at lbl.gov (Mark Adams) Date: Fri, 9 Jun 2023 11:02:01 -0400 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: Message-ID:

An IS is just an array of integers. We need your context. Is this question for sparse matrices? If so, look at the documentation on the AIJ matrix construction and the global vertex numbering system. Mark

On Thu, Jun 8, 2023 at 1:15 PM YuSh Lo wrote: > Hi, > > I have an IS that contains some vertex that is in natural numbering. How > do I map them to global numbering without being distributed? > > Thanks, > Josh >
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at petsc.dev Fri Jun 9 10:42:38 2023
From: bsmith at petsc.dev (Barry Smith) Date: Fri, 9 Jun 2023 11:42:38 -0400 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: Message-ID: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev>

You might be looking for https://petsc.org/release/manualpages/AO/AO/#ao

> On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: > > An IS is just an array of integers. We need your context. > Is this question for sparse matrices? If so, look at the documentation on the AIJ matrix construction and the global vertex numbering system. > > Mark > > On Thu, Jun 8, 2023 at 1:15 PM YuSh Lo > wrote: >> Hi, >> >> I have an IS that contains some vertex that is in natural numbering. How do I map them to global numbering without being distributed? >> >> Thanks, >> Josh
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From liufield at gmail.com Fri Jun 9 11:01:45 2023
From: liufield at gmail.com (neil liu) Date: Fri, 9 Jun 2023 12:01:45 -0400 Subject: [petsc-users] Inquiry about the definitely lost memory Message-ID:

Dear PETSc developers, I am using valgrind to check for memory leaks. It shows, [image: image.png] Finally, I found that DMPlexRestoreTransitiveClosure can resolve this memory leak. My question is: from the above screenshot, it seems the leak is related to MPI. How can I relate that report to DMPlexRestoreTransitiveClosure? Thanks, Xiaodong
-------------- next part --------------
An HTML attachment was scrubbed... URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 52278 bytes Desc: not available URL:

From knepley at gmail.com Fri Jun 9 11:37:55 2023
From: knepley at gmail.com (Matthew Knepley) Date: Fri, 9 Jun 2023 12:37:55 -0400 Subject: [petsc-users] Inquiry about the definitely lost memory In-Reply-To: References: Message-ID:

On Fri, Jun 9, 2023 at 12:04 PM neil liu wrote: > Dear PETSc developers, > > I am using valgrind to check for memory leaks. It shows, > [image: image.png] > Finally, I found that DMPlexRestoreTransitiveClosure can resolve this > memory leak. > > My question is: from the above screenshot, it seems the leak is related to > MPI. How can I relate that report to DMPlexRestoreTransitiveClosure? >

This is a knock-on leak related to the first one. You will also see the original leak. Since it is inside DM, it will come from DMGetWorkArray(). Thanks, Matt

> Thanks, > > Xiaodong > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed... URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 52278 bytes Desc: not available URL:

From bsmith at petsc.dev Fri Jun 9 11:38:14 2023
From: bsmith at petsc.dev (Barry Smith) Date: Fri, 9 Jun 2023 12:38:14 -0400 Subject: [petsc-users] Inquiry about the definitely lost memory In-Reply-To: References: Message-ID: <25F21AB9-EE13-456B-A929-24C6CE14F421@petsc.dev>

These are MPI objects PETSc creates in PetscInitialize(); for successful runs they should all be removed in PetscFinalize(), hence they should not appear as valgrind leaks. Are you sure PetscFinalize() is called and completes?
We'll need the exact PETSc version you are using to know exactly which MPI object is not being destroyed. Barry > On Jun 9, 2023, at 12:01 PM, neil liu wrote: > > Dear Petsc developers, > > I am using valgrind to check the memory leak. It shows, > > Finally, I found that DMPlexrestoretrasitiveclosure can resolve this memory leak. > > My question is from the above screen shot, it seems the leak is related to MPI. How can I relate that reminder to DMPlexrestoretrasitiveclosure ? > > Thanks, > > Xiaodong From ysjosh.lo at gmail.com Fri Jun 9 12:45:40 2023 From: ysjosh.lo at gmail.com (YuSh Lo) Date: Fri, 9 Jun 2023 12:45:40 -0500 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> References: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> Message-ID: Hi Barry, Is there any way to use the mapping generated by DMPlexDistribute along with AO? Thanks, Josh Barry Smith ? 2023?6?9? ?? ??10:42??? > > You might be looking for https://petsc.org/release/manualpages/AO/AO/#ao > > > On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: > > An IS is just an array of integers. We need your context. > Is this question for sparse matrices? If so look at the documentation on > the AIJ matrix construction and the global vertex numbering system. > > Mark > > On Thu, Jun 8, 2023 at 1:15?PM YuSh Lo wrote: > >> Hi, >> >> I have an IS that contains some vertex that is in natural numbering. How >> do I map them to global numbering without being distributed? >> >> Thanks, >> Josh >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jun 9 13:04:05 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 9 Jun 2023 14:04:05 -0400 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> Message-ID: On Fri, Jun 9, 2023 at 1:46?PM YuSh Lo wrote: > Hi Barry, > > Is there any way to use the mapping generated by DMPlexDistribute along > with AO? > For Plex, if you turn on https://petsc.org/main/manualpages/DM/DMSetUseNatural/ before DMPlexDistribute(), it will compute and store a GlobalToNatural map. This can be used to map vectors back and forth, but you can extract the SF DMPlexGetGlobalToNaturalSF and use that to remap your IS, by extracting the indices. THanks, Matt > Thanks, > Josh > > > Barry Smith ? 2023?6?9? ?? ??10:42??? > >> >> You might be looking for >> https://petsc.org/release/manualpages/AO/AO/#ao >> >> >> On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: >> >> An IS is just an array of integers. We need your context. >> Is this question for sparse matrices? If so look at the documentation on >> the AIJ matrix construction and the global vertex numbering system. >> >> Mark >> >> On Thu, Jun 8, 2023 at 1:15?PM YuSh Lo wrote: >> >>> Hi, >>> >>> I have an IS that contains some vertex that is in natural numbering. How >>> do I map them to global numbering without being distributed? >>> >>> Thanks, >>> Josh >>> >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
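A minimal sketch of the workflow Matt describes above (untested; the mesh file name and variable names are placeholders, and error handling is reduced to PetscCall()):

#include <petscdmplex.h>
#include <petscsf.h>

int main(int argc, char **argv)
{
  DM      dm, dmDist;
  PetscSF gtnSF;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.msh", NULL, PETSC_TRUE, &dm));
  PetscCall(DMSetUseNatural(dm, PETSC_TRUE));           /* must be set before DMPlexDistribute() */
  PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
  if (dmDist) {
    PetscCall(DMDestroy(&dm));
    dm = dmDist;
  }
  PetscCall(DMPlexGetGlobalToNaturalSF(dm, &gtnSF));    /* SF relating global and natural orderings */
  if (gtnSF) PetscCall(PetscSFView(gtnSF, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}

Vectors can then be moved between the two orderings with DMPlexGlobalToNaturalBegin/End() and the NaturalToGlobal variants, and PetscSFGetGraph() on gtnSF exposes the index correspondence one would use to translate the entries of an IS, as suggested above.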
URL: From Jiannan_Tu at uml.edu Fri Jun 9 14:24:49 2023 From: Jiannan_Tu at uml.edu (Tu, Jiannan) Date: Fri, 9 Jun 2023 19:24:49 +0000 Subject: [petsc-users] Unconditional jump or move depends on uninitialised value(s) In-Reply-To: <1E04F37F-2286-404D-AB87-9FDD5E878DC4@petsc.dev> References: <295E7E1A-1649-435F-AE65-F061F287513F@petsc.dev> <3F1BD989-8516-4649-A385-5F94FD1A9470@petsc.dev> <0B7BA32F-03CE-44F2-A9A3-4584B2D7AB94@anl.gov> <9523EDF9-7C02-4872-9E0E-1DFCBCB28066@anl.gov> <88895B56-2BEF-49BF-B6E9-F75186E712D9@anl.gov> <1E04F37F-2286-404D-AB87-9FDD5E878DC4@petsc.dev> Message-ID: To avoid warninigs about ?unconditional jump or move depends on uninitialized value(s)?, I simplified my programs so that the warnings remain only through SNESSolve(). Some of them are appended below. My question is if such warnings really matter and affect the simulation results? Thank you for your kind advice. Jiannan ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0xA19178C: sqrt (w_sqrt_compat.c:31) ==1988731== by 0x4EA9E4C: VecNorm_Seq (bvec2.c:227) ==1988731== by 0x4F705C8: VecNorm (rvector.c:228) ==1988731== by 0x6678D91: SNESSolve_NEWTONTR (tr.c:300) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) ==1988731== by 0x122FA7: main (iditm3d.cpp:138) ==1988731== ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0x667495C: PetscIsInfOrNanReal (petscmath.h:788) ==1988731== by 0x6678E0C: SNESSolve_NEWTONTR (tr.c:301) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) ==1988731== by 0x122FA7: main (iditm3d.cpp:138) ==1988731== ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0xA347078: __printf_fp_l (printf_fp.c:387) ==1988731== by 0xA360547: printf_positional (vfprintf-internal.c:2072) ==1988731== by 0xA361DCC: __vfprintf_internal (vfprintf-internal.c:1733) ==1988731== by 0xA376F99: __vsnprintf_internal (vsnprintf.c:114) ==1988731== by 0x4A77784: PetscVSNPrintf (mprint.c:176) ==1988731== by 0x4A77EF4: PetscVFPrintfDefault (mprint.c:292) ==1988731== by 0x4BCBB92: PetscViewerASCIIPrintf (filev.c:607) ==1988731== by 0x674764B: SNESMonitorDefault (snesut.c:296) ==1988731== by 0x6728A3E: SNESMonitor (snes.c:4059) ==1988731== by 0x6679743: SNESSolve_NEWTONTR (tr.c:309) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) ==1988731== by 0x122FA7: main (iditm3d.cpp:138) ==1988731== ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0xA342B34: __mpn_extract_double (dbl2mpn.c:56) ==1988731== by 0xA34738E: __printf_fp_l (printf_fp.c:387) ==1988731== by 0xA360547: printf_positional (vfprintf-internal.c:2072) ==1988731== by 0xA361DCC: __vfprintf_internal (vfprintf-internal.c:1733) ==1988731== by 0xA376F99: __vsnprintf_internal (vsnprintf.c:114) ==1988731== by 0x4A77784: PetscVSNPrintf (mprint.c:176) ==1988731== by 0x4A77EF4: PetscVFPrintfDefault (mprint.c:292) ==1988731== by 0x4BCBB92: PetscViewerASCIIPrintf (filev.c:607) ==1988731== by 0x674764B: SNESMonitorDefault (snesut.c:296) ==1988731== by 0x6728A3E: SNESMonitor (snes.c:4059) ==1988731== by 0x6679743: SNESSolve_NEWTONTR (tr.c:309) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Fri Jun 9 14:35:30 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 9 Jun 2023 15:35:30 -0400 Subject: [petsc-users] Unconditional jump or move depends on uninitialised value(s) In-Reply-To: References: <295E7E1A-1649-435F-AE65-F061F287513F@petsc.dev> <3F1BD989-8516-4649-A385-5F94FD1A9470@petsc.dev> <0B7BA32F-03CE-44F2-A9A3-4584B2D7AB94@anl.gov> <9523EDF9-7C02-4872-9E0E-1DFCBCB28066@anl.gov> <88895B56-2BEF-49BF-B6E9-F75186E712D9@anl.gov> <1E04F37F-2286-404D-AB87-9FDD5E878DC4@petsc.dev> Message-ID: <89D8D36D-72A1-4A58-B565-FFFADA945A64@petsc.dev> Yes they matter; they cannot be ignored. The most likely cause is that you did not fill in certain entries in the vector during the function evaluation routine you set with SNESSetFunction(). Another possibility is if you provided a right hand side vector to the SNES (uncommon) that not all values in that vector have been filled in. If you call VecView() inside your function evaluation routine immediately after you have finished filling the vector it may help track down which entries you missed. Barry > On Jun 9, 2023, at 3:24 PM, Tu, Jiannan wrote: > > To avoid warninigs about ?unconditional jump or move depends on uninitialized value(s)?, I simplified my programs so that the warnings remain only through SNESSolve(). Some of them are appended below. My question is if such warnings really matter and affect the simulation results? <> > > Thank you for your kind advice. > > Jiannan > > ==1988731== Conditional jump or move depends on uninitialised value(s) > ==1988731== at 0xA19178C: sqrt (w_sqrt_compat.c:31) > ==1988731== by 0x4EA9E4C: VecNorm_Seq (bvec2.c:227) > ==1988731== by 0x4F705C8: VecNorm (rvector.c:228) > ==1988731== by 0x6678D91: SNESSolve_NEWTONTR (tr.c:300) > ==1988731== by 0x673093F: SNESSolve (snes.c:4809) > ==1988731== by 0x122FA7: main (iditm3d.cpp:138) > ==1988731== > ==1988731== Conditional jump or move depends on uninitialised value(s) > ==1988731== at 0x667495C: PetscIsInfOrNanReal (petscmath.h:788) > ==1988731== by 0x6678E0C: SNESSolve_NEWTONTR (tr.c:301) > ==1988731== by 0x673093F: SNESSolve (snes.c:4809) > ==1988731== by 0x122FA7: main (iditm3d.cpp:138) > ==1988731== > ==1988731== Conditional jump or move depends on uninitialised value(s) > ==1988731== at 0xA347078: __printf_fp_l (printf_fp.c:387) > ==1988731== by 0xA360547: printf_positional (vfprintf-internal.c:2072) > ==1988731== by 0xA361DCC: __vfprintf_internal (vfprintf-internal.c:1733) > ==1988731== by 0xA376F99: __vsnprintf_internal (vsnprintf.c:114) > ==1988731== by 0x4A77784: PetscVSNPrintf (mprint.c:176) > ==1988731== by 0x4A77EF4: PetscVFPrintfDefault (mprint.c:292) > ==1988731== by 0x4BCBB92: PetscViewerASCIIPrintf (filev.c:607) > ==1988731== by 0x674764B: SNESMonitorDefault (snesut.c:296) > ==1988731== by 0x6728A3E: SNESMonitor (snes.c:4059) > ==1988731== by 0x6679743: SNESSolve_NEWTONTR (tr.c:309) > ==1988731== by 0x673093F: SNESSolve (snes.c:4809) > ==1988731== by 0x122FA7: main (iditm3d.cpp:138) > ==1988731== > ==1988731== Conditional jump or move depends on uninitialised value(s) > ==1988731== at 0xA342B34: __mpn_extract_double (dbl2mpn.c:56) > ==1988731== by 0xA34738E: __printf_fp_l (printf_fp.c:387) > ==1988731== by 0xA360547: printf_positional (vfprintf-internal.c:2072) > ==1988731== by 0xA361DCC: __vfprintf_internal (vfprintf-internal.c:1733) > ==1988731== by 0xA376F99: __vsnprintf_internal (vsnprintf.c:114) > ==1988731== by 0x4A77784: PetscVSNPrintf (mprint.c:176) > ==1988731== by 0x4A77EF4: PetscVFPrintfDefault 
(mprint.c:292) > ==1988731== by 0x4BCBB92: PetscViewerASCIIPrintf (filev.c:607) > ==1988731== by 0x674764B: SNESMonitorDefault (snesut.c:296) > ==1988731== by 0x6728A3E: SNESMonitor (snes.c:4059) > ==1988731== by 0x6679743: SNESSolve_NEWTONTR (tr.c:309) > ==1988731== by 0x673093F: SNESSolve (snes.c:4809) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob.fai at gmail.com Fri Jun 9 14:55:27 2023 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Fri, 9 Jun 2023 14:55:27 -0500 Subject: [petsc-users] Unconditional jump or move depends on uninitialised value(s) In-Reply-To: References: Message-ID: <8D5EF829-F1A5-4FBE-BF6C-15100DF7D4EF@gmail.com> > I'll see what gcc complier options I should use to detect the use of declared variables that are not initialized The flag you are looking for is -Wall. Please fix all the warnings that it may raise. Generally speaking, it is also much easier to ensure your variables are initialized if you restrict them to the smallest scope possible. That is, declare the variable immediately before you use it. If you only use it within a loop, then only declare it inside that loop. If you only use it in an if branch, then only declare it inside that branch. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) > On May 10, 2023, at 12:38, Tu, Jiannan wrote: > > I'll see what gcc complier options I should use to detect the use of declared variables that are not initialized -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jiannan_Tu at uml.edu Fri Jun 9 15:27:55 2023 From: Jiannan_Tu at uml.edu (Tu, Jiannan) Date: Fri, 9 Jun 2023 20:27:55 +0000 Subject: [petsc-users] Unconditional jump or move depends on uninitialised value(s) In-Reply-To: <89D8D36D-72A1-4A58-B565-FFFADA945A64@petsc.dev> References: <295E7E1A-1649-435F-AE65-F061F287513F@petsc.dev> <3F1BD989-8516-4649-A385-5F94FD1A9470@petsc.dev> <0B7BA32F-03CE-44F2-A9A3-4584B2D7AB94@anl.gov> <9523EDF9-7C02-4872-9E0E-1DFCBCB28066@anl.gov> <88895B56-2BEF-49BF-B6E9-F75186E712D9@anl.gov> <1E04F37F-2286-404D-AB87-9FDD5E878DC4@petsc.dev> <89D8D36D-72A1-4A58-B565-FFFADA945A64@petsc.dev> Message-ID: Barry, Thank you for the very good advice. I think all the elements have been filled but there may be some on the boundaries missed. Jiannan ________________________________ From: Barry Smith Sent: Friday, June 9, 2023 3:35 PM To: Tu, Jiannan Cc: petsc-users at mcs.anl.gov ; Zhang, Hong Subject: Re: [petsc-users]Unconditional jump or move depends on uninitialised value(s) CAUTION: This email was sent from outside the UMass Lowell network. Yes they matter; they cannot be ignored. The most likely cause is that you did not fill in certain entries in the vector during the function evaluation routine you set with SNESSetFunction(). Another possibility is if you provided a right hand side vector to the SNES (uncommon) that not all values in that vector have been filled in. If you call VecView() inside your function evaluation routine immediately after you have finished filling the vector it may help track down which entries you missed. Barry On Jun 9, 2023, at 3:24 PM, Tu, Jiannan wrote: To avoid warninigs about ?unconditional jump or move depends on uninitialized value(s)?, I simplified my programs so that the warnings remain only through SNESSolve(). Some of them are appended below. My question is if such warnings really matter and affect the simulation results? Thank you for your kind advice. 
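As an illustration of the check Barry describes, the residual routine registered with SNESSetFunction() can view the vector immediately after it is assembled. The sketch below is generic; the body is a placeholder and is not taken from the actual application code:

    PetscErrorCode FormFunction(SNES snes, Vec X, Vec F, void *ctx)
    {
      const PetscScalar *x;
      PetscScalar       *f;

      PetscFunctionBeginUser;
      /* optional: guarantees no residual entry is ever left uninitialised */
      PetscCall(VecSet(F, 0.0));
      PetscCall(VecGetArrayRead(X, &x));
      PetscCall(VecGetArray(F, &f));
      /* ... fill f[] for interior AND boundary points here, using x[] ... */
      PetscCall(VecRestoreArray(F, &f));
      PetscCall(VecRestoreArrayRead(X, &x));
      /* temporary debugging aid: entries that were never written show up as
         exact zeros (with the VecSet above) or as garbage (without it) */
      PetscCall(VecView(F, PETSC_VIEWER_STDOUT_WORLD));
      PetscFunctionReturn(PETSC_SUCCESS);
    }

Once the missing entries are located, the VecSet() and VecView() lines can be removed again.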
Jiannan ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0xA19178C: sqrt (w_sqrt_compat.c:31) ==1988731== by 0x4EA9E4C: VecNorm_Seq (bvec2.c:227) ==1988731== by 0x4F705C8: VecNorm (rvector.c:228) ==1988731== by 0x6678D91: SNESSolve_NEWTONTR (tr.c:300) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) ==1988731== by 0x122FA7: main (iditm3d.cpp:138) ==1988731== ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0x667495C: PetscIsInfOrNanReal (petscmath.h:788) ==1988731== by 0x6678E0C: SNESSolve_NEWTONTR (tr.c:301) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) ==1988731== by 0x122FA7: main (iditm3d.cpp:138) ==1988731== ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0xA347078: __printf_fp_l (printf_fp.c:387) ==1988731== by 0xA360547: printf_positional (vfprintf-internal.c:2072) ==1988731== by 0xA361DCC: __vfprintf_internal (vfprintf-internal.c:1733) ==1988731== by 0xA376F99: __vsnprintf_internal (vsnprintf.c:114) ==1988731== by 0x4A77784: PetscVSNPrintf (mprint.c:176) ==1988731== by 0x4A77EF4: PetscVFPrintfDefault (mprint.c:292) ==1988731== by 0x4BCBB92: PetscViewerASCIIPrintf (filev.c:607) ==1988731== by 0x674764B: SNESMonitorDefault (snesut.c:296) ==1988731== by 0x6728A3E: SNESMonitor (snes.c:4059) ==1988731== by 0x6679743: SNESSolve_NEWTONTR (tr.c:309) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) ==1988731== by 0x122FA7: main (iditm3d.cpp:138) ==1988731== ==1988731== Conditional jump or move depends on uninitialised value(s) ==1988731== at 0xA342B34: __mpn_extract_double (dbl2mpn.c:56) ==1988731== by 0xA34738E: __printf_fp_l (printf_fp.c:387) ==1988731== by 0xA360547: printf_positional (vfprintf-internal.c:2072) ==1988731== by 0xA361DCC: __vfprintf_internal (vfprintf-internal.c:1733) ==1988731== by 0xA376F99: __vsnprintf_internal (vsnprintf.c:114) ==1988731== by 0x4A77784: PetscVSNPrintf (mprint.c:176) ==1988731== by 0x4A77EF4: PetscVFPrintfDefault (mprint.c:292) ==1988731== by 0x4BCBB92: PetscViewerASCIIPrintf (filev.c:607) ==1988731== by 0x674764B: SNESMonitorDefault (snesut.c:296) ==1988731== by 0x6728A3E: SNESMonitor (snes.c:4059) ==1988731== by 0x6679743: SNESSolve_NEWTONTR (tr.c:309) ==1988731== by 0x673093F: SNESSolve (snes.c:4809) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jiannan_Tu at uml.edu Fri Jun 9 15:31:38 2023 From: Jiannan_Tu at uml.edu (Tu, Jiannan) Date: Fri, 9 Jun 2023 20:31:38 +0000 Subject: [petsc-users] Unconditional jump or move depends on uninitialised value(s) In-Reply-To: <8D5EF829-F1A5-4FBE-BF6C-15100DF7D4EF@gmail.com> References: <8D5EF829-F1A5-4FBE-BF6C-15100DF7D4EF@gmail.com> Message-ID: Jacob, Thank you very much. I did use -Wall compiler option. Nevertheless, declaring the variables in the smallest scope is a very good way to avoid uninitialized usage of the variables. Jiannan ________________________________ From: Jacob Faibussowitsch Sent: Friday, June 9, 2023 3:55 PM To: Tu, Jiannan Cc: Barry Smith ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Unconditional jump or move depends on uninitialised value(s) You don't often get email from jacob.fai at gmail.com. Learn why this is important CAUTION: This email was sent from outside the UMass Lowell network. > I'll see what gcc complier options I should use to detect the use of declared variables that are not initialized The flag you are looking for is -Wall. Please fix all the warnings that it may raise. 
Generally speaking, it is also much easier to ensure your variables are initialized if you restrict them to the smallest scope possible. That is, declare the variable immediately before you use it. If you only use it within a loop, then only declare it inside that loop. If you only use it in an if branch, then only declare it inside that branch. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) On May 10, 2023, at 12:38, Tu, Jiannan wrote: I'll see what gcc complier options I should use to detect the use of declared variables that are not initialized -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Fri Jun 9 15:33:29 2023 From: liufield at gmail.com (neil liu) Date: Fri, 9 Jun 2023 16:33:29 -0400 Subject: [petsc-users] Inquiry about the definitely lost memory In-Reply-To: <25F21AB9-EE13-456B-A929-24C6CE14F421@petsc.dev> References: <25F21AB9-EE13-456B-A929-24C6CE14F421@petsc.dev> Message-ID: Thanks a lot, Matt and Barry. Indeed, I found the original leak that will lead to something related to DMGetWorkArray. ==15547== 50,000 bytes in 1 blocks are definitely lost in loss record 2,786 of 2,791 ==15547== at 0x4C37135: malloc (vg_replace_malloc.c:381) ==15547== by 0x9BE4E43: MPL_malloc (mpl_trmem.h:373) ==15547== by 0x9BE6B3B: PMIU_cmd_add_int (pmi_wire.c:538) ==15547== by 0x9BEB7C6: PMIU_msg_set_query_abort (pmi_msg.c:322) ==15547== by 0x9BE16F0: PMI_Abort (pmi_v1.c:327) ==15547== by 0x9A8E3E7: MPIR_pmi_abort (mpir_pmi.c:243) ==15547== by 0x9B20BC7: MPID_Abort (mpid_abort.c:67) ==15547== by 0x9A22823: MPIR_Abort_impl (init_impl.c:270) ==15547== by 0x97FFF02: internal_Abort (abort.c:65) ==15547== by 0x98000C3: PMPI_Abort (abort.c:112) ==15547== by 0x58FE116: PetscError (err.c:403) ==15547== by 0x410CA3: main (ex1.c:764) //Call DMDestroy(); On Fri, Jun 9, 2023 at 12:38?PM Barry Smith wrote: > > This are MPI objects PETSc creates in PetscInitialize(), for successful > runs they should all be removed in PETSc finalize, hence they should not > appear as valgrind links. > > Are you sure PetscFinalize() is called and completes? > > We'll need the exact PETSc version you are using to know exactly which > MPI object is not being destroyed. > > > Barry > > > > On Jun 9, 2023, at 12:01 PM, neil liu wrote: > > > > Dear Petsc developers, > > > > I am using valgrind to check the memory leak. It shows, > > > > Finally, I found that DMPlexrestoretrasitiveclosure can resolve this > memory leak. > > > > My question is from the above screen shot, it seems the leak is related > to MPI. How can I relate that reminder to DMPlexrestoretrasitiveclosure ? > > > > Thanks, > > > > Xiaodong > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jun 9 16:36:39 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 9 Jun 2023 17:36:39 -0400 Subject: [petsc-users] Inquiry about the definitely lost memory In-Reply-To: References: <25F21AB9-EE13-456B-A929-24C6CE14F421@petsc.dev> Message-ID: <2A67BB0D-B1B4-432C-B189-1291F37DA90D@petsc.dev> If the program is not completely successful, that is it terminates early due to an error condition, we do not attempt to recover all the memory and resources. > On Jun 9, 2023, at 4:33 PM, neil liu wrote: > > Thanks a lot, Matt and Barry. > > Indeed, I found the original leak that will lead to something related to DMGetWorkArray. 
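For reference, leaks that valgrind attributes to DMGetWorkArray() are commonly a DMPlexGetTransitiveClosure() call without the matching restore; the pairing looks roughly like this (the cell index and the useCone flag are illustrative):

    PetscInt *closure = NULL, nclosure;

    PetscCall(DMPlexGetTransitiveClosure(dm, cell, PETSC_TRUE, &nclosure, &closure));
    /* closure holds nclosure (point, orientation) pairs for this cell */
    /* ... use the closure ... */
    PetscCall(DMPlexRestoreTransitiveClosure(dm, cell, PETSC_TRUE, &nclosure, &closure));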
> ==15547== 50,000 bytes in 1 blocks are definitely lost in loss record 2,786 of 2,791 > ==15547== at 0x4C37135: malloc (vg_replace_malloc.c:381) > ==15547== by 0x9BE4E43: MPL_malloc (mpl_trmem.h:373) > ==15547== by 0x9BE6B3B: PMIU_cmd_add_int (pmi_wire.c:538) > ==15547== by 0x9BEB7C6: PMIU_msg_set_query_abort (pmi_msg.c:322) > ==15547== by 0x9BE16F0: PMI_Abort (pmi_v1.c:327) > ==15547== by 0x9A8E3E7: MPIR_pmi_abort (mpir_pmi.c:243) > ==15547== by 0x9B20BC7: MPID_Abort (mpid_abort.c:67) > ==15547== by 0x9A22823: MPIR_Abort_impl (init_impl.c:270) > ==15547== by 0x97FFF02: internal_Abort (abort.c:65) > ==15547== by 0x98000C3: PMPI_Abort (abort.c:112) > ==15547== by 0x58FE116: PetscError (err.c:403) > ==15547== by 0x410CA3: main (ex1.c:764) //Call DMDestroy(); > > On Fri, Jun 9, 2023 at 12:38?PM Barry Smith > wrote: >> >> This are MPI objects PETSc creates in PetscInitialize(), for successful runs they should all be removed in PETSc finalize, hence they should not appear as valgrind links. >> >> Are you sure PetscFinalize() is called and completes? >> >> We'll need the exact PETSc version you are using to know exactly which MPI object is not being destroyed. >> >> >> Barry >> >> >> > On Jun 9, 2023, at 12:01 PM, neil liu > wrote: >> > >> > Dear Petsc developers, >> > >> > I am using valgrind to check the memory leak. It shows, >> > >> > Finally, I found that DMPlexrestoretrasitiveclosure can resolve this memory leak. >> > >> > My question is from the above screen shot, it seems the leak is related to MPI. How can I relate that reminder to DMPlexrestoretrasitiveclosure ? >> > >> > Thanks, >> > >> > Xiaodong >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From junming.duan at epfl.ch Mon Jun 12 05:12:59 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Mon, 12 Jun 2023 10:12:59 +0000 Subject: [petsc-users] dm_view of high-order geometry/solution Message-ID: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> Dear all, I am playing with dm/impls/plex/tests/ex33.c and know how to set high-order geometry. Is it possible to output the final mesh to vtu, e.g. annulus example? Thanks! Junming -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jun 12 06:34:24 2023 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Jun 2023 05:34:24 -0600 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> Message-ID: On Mon, Jun 12, 2023 at 4:13?AM Duan Junming via petsc-users < petsc-users at mcs.anl.gov> wrote: > Dear all, > > > I am playing with dm/impls/plex/tests/ex33.c and know how to set > high-order geometry. > > Is it possible to output the final mesh to vtu, e.g. annulus example? > The problem is that VTK has no nice, standard way to talk about higher order polynomials. Thus, the best way to do this is to call DMRefine() once or twice and output that mesh. Thanks, Matt > Thanks! > > Junming > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
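A minimal sketch of the refine-then-view workaround Matt describes; the refinement depth and the file name are arbitrary choices:

    DM          dmFine;
    PetscViewer viewer;

    PetscCall(DMRefine(dm, PETSC_COMM_WORLD, &dmFine));   /* repeat once or twice for a smoother picture */
    PetscCall(PetscViewerVTKOpen(PETSC_COMM_WORLD, "mesh_refined.vtu", FILE_MODE_WRITE, &viewer));
    PetscCall(DMView(dmFine, viewer));
    PetscCall(PetscViewerDestroy(&viewer));
    PetscCall(DMDestroy(&dmFine));

From the command line, -dm_refine 2 -dm_view vtk:refined.vtu should achieve roughly the same thing, with the caveat that the VTU file then only carries the refined, low-order geometry.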
URL: From junming.duan at epfl.ch Mon Jun 12 07:01:55 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Mon, 12 Jun 2023 12:01:55 +0000 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch>, Message-ID: <80d4226154f940ef814ba34fe4815970@epfl.ch> Dear Matt, Thank you for the reply. I have a more specific question about the spectral element example. Do you have any suggestions that how to write all the nodes in each cell to .vtu? Thanks! Junming ________________________________ From: knepley at gmail.com Sent: Monday, June 12, 2023 1:34:24 PM To: Duan Junming Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] dm_view of high-order geometry/solution On Mon, Jun 12, 2023 at 4:13?AM Duan Junming via petsc-users > wrote: Dear all, I am playing with dm/impls/plex/tests/ex33.c and know how to set high-order geometry. Is it possible to output the final mesh to vtu, e.g. annulus example? The problem is that VTK has no nice, standard way to talk about higher order polynomials. Thus, the best way to do this is to call DMRefine() once or twice and output that mesh. Thanks, Matt Thanks! Junming -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jun 12 07:06:54 2023 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Jun 2023 06:06:54 -0600 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: <80d4226154f940ef814ba34fe4815970@epfl.ch> References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> <80d4226154f940ef814ba34fe4815970@epfl.ch> Message-ID: On Mon, Jun 12, 2023 at 6:01?AM Duan Junming wrote: > Dear Matt, > > Thank you for the reply. I have a more specific question about the > spectral element example. Do you have any suggestions that how to write > all the nodes in each cell to .vtu? > It is the same procedure. VTU is not a great format for this. It wants everything at first order. Thanks, Matt > Thanks! > > Junming > ------------------------------ > *From:* knepley at gmail.com > *Sent:* Monday, June 12, 2023 1:34:24 PM > *To:* Duan Junming > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] dm_view of high-order geometry/solution > > On Mon, Jun 12, 2023 at 4:13?AM Duan Junming via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Dear all, >> >> >> I am playing with dm/impls/plex/tests/ex33.c and know how to set >> high-order geometry. >> >> Is it possible to output the final mesh to vtu, e.g. annulus example? >> > > The problem is that VTK has no nice, standard way to talk about higher > order polynomials. Thus, the > best way to do this is to call DMRefine() once or twice and output that > mesh. > > Thanks, > > Matt > > >> Thanks! >> >> Junming >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephan.koehler at math.tu-freiberg.de Mon Jun 12 07:24:31 2023 From: stephan.koehler at math.tu-freiberg.de (=?UTF-8?Q?Stephan_K=c3=b6hler?=) Date: Mon, 12 Jun 2023 14:24:31 +0200 Subject: [petsc-users] Bug Report TaoALMM class Message-ID: Dear PETSc/Tao team, I think there might be a bug in the Tao ALMM class:? In the function TaoALMMComputeAugLagAndGradient_Private(), see, eg. https://petsc.org/release/src/tao/constrained/impls/almm/almm.c.html#TAOALMM line 648 the gradient seems to be wrong. The given function and gradient computation is Lc = F + Ye^TCe + Yi^T(Ci - S) + 0.5*mu*[Ce^TCe + (Ci - S)^T(Ci - S)], dLc/dX = dF/dX + Ye^TAe + Yi^TAi + 0.5*mu*[Ce^TAe + (Ci - S)^TAi], but I think the gradient should be (without 0.5) dLc/dX = dF/dX + Ye^TAe + Yi^TAi + mu*[Ce^TAe + (Ci - S)^TAi]. Kind regards, Stephan K?hler -- Stephan K?hler TU Bergakademie Freiberg Institut f?r numerische Mathematik und Optimierung Akademiestra?e 6 09599 Freiberg Geb?udeteil Mittelbau, Zimmer 2.07 Telefon: +49 (0)3731 39-3173 (B?ro) -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0xC9BF2C20DFE9F713.asc Type: application/pgp-keys Size: 758 bytes Desc: OpenPGP public key URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 236 bytes Desc: OpenPGP digital signature URL: From jed at jedbrown.org Mon Jun 12 07:52:48 2023 From: jed at jedbrown.org (Jed Brown) Date: Mon, 12 Jun 2023 06:52:48 -0600 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> <80d4226154f940ef814ba34fe4815970@epfl.ch> Message-ID: <87mt14vo4f.fsf@jedbrown.org> Matthew Knepley writes: > On Mon, Jun 12, 2023 at 6:01?AM Duan Junming wrote: > >> Dear Matt, >> >> Thank you for the reply. I have a more specific question about the >> spectral element example. Do you have any suggestions that how to write >> all the nodes in each cell to .vtu? >> > It is the same procedure. VTU is not a great format for this. It wants > everything at first order. I would recommend configuring with --download-cgns and running with -dm_view cgns:output.cgns. This format has efficient parallel IO and curved elements work for moderate order in Paraview. From junming.duan at epfl.ch Mon Jun 12 08:38:35 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Mon, 12 Jun 2023 13:38:35 +0000 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: <87mt14vo4f.fsf@jedbrown.org> References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> <80d4226154f940ef814ba34fe4815970@epfl.ch> , <87mt14vo4f.fsf@jedbrown.org> Message-ID: Dear Jed, Thank you for the suggestion. When I run tests/ex33.c with ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 3 -dm_refine 1 -dm_view cgns:test.cgns and load it using Paraview, the mesh is still with straight lines. Should I modify the code to make cgns work? Or any other examples for me to start? Thanks! Junming ________________________________ From: Jed Brown Sent: Monday, June 12, 2023 2:52:48 PM To: Matthew Knepley; Duan Junming Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] dm_view of high-order geometry/solution Matthew Knepley writes: > On Mon, Jun 12, 2023 at 6:01?AM Duan Junming wrote: > >> Dear Matt, >> >> Thank you for the reply. I have a more specific question about the >> spectral element example. 
Do you have any suggestions that how to write >> all the nodes in each cell to .vtu? >> > It is the same procedure. VTU is not a great format for this. It wants > everything at first order. I would recommend configuring with --download-cgns and running with -dm_view cgns:output.cgns. This format has efficient parallel IO and curved elements work for moderate order in Paraview. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Jun 12 10:53:39 2023 From: jed at jedbrown.org (Jed Brown) Date: Mon, 12 Jun 2023 09:53:39 -0600 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> <80d4226154f940ef814ba34fe4815970@epfl.ch> <87mt14vo4f.fsf@jedbrown.org> Message-ID: <87h6rcvfr0.fsf@jedbrown.org> Duan Junming writes: > Dear Jed, > > > Thank you for the suggestion. > > When I run tests/ex33.c with > > ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 3 -dm_refine 1 -dm_view cgns:test.cgns > > and load it using Paraview, > > the mesh is still with straight lines. Ah, the viewer is keyed on the field (since the CGNS as supported by Paraview specifies coordinates and fields in the same space). That doesn't exist in your case. If you apply this patch and add `-petscspace_degre 3` to your command, you'll see that high order information is present. Paraview doesn't render as curves in all views, but it has the data. diff --git i/src/dm/impls/plex/tests/ex33.c w/src/dm/impls/plex/tests/ex33.c index 803095bc082..590facfa4f4 100644 --- i/src/dm/impls/plex/tests/ex33.c +++ w/src/dm/impls/plex/tests/ex33.c @@ -198,7 +198,6 @@ PetscErrorCode CreateMesh(MPI_Comm comm, AppCtx *ctx, DM *dm) default: SETERRQ(comm, PETSC_ERR_ARG_OUTOFRANGE, "Unknown mesh transform %d", ctx->meshTransform); } - PetscCall(DMViewFromOptions(*dm, NULL, "-dm_view")); PetscFunctionReturn(PETSC_SUCCESS); } @@ -227,6 +226,7 @@ static PetscErrorCode CreateDiscretization(DM dm, AppCtx *ctx) PetscCall(DMCreateDS(dm)); PetscCall(DMGetDS(dm, &ds)); PetscCall(PetscDSSetObjective(ds, 0, volume)); + PetscCall(DMViewFromOptions(dm, NULL, "-dm_view")); PetscFunctionReturn(PETSC_SUCCESS); } I can update the viewer to handle the degenerate case of no field (all my models have fields). From jed at jedbrown.org Mon Jun 12 12:07:04 2023 From: jed at jedbrown.org (Jed Brown) Date: Mon, 12 Jun 2023 11:07:04 -0600 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: <87h6rcvfr0.fsf@jedbrown.org> References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> <80d4226154f940ef814ba34fe4815970@epfl.ch> <87mt14vo4f.fsf@jedbrown.org> <87h6rcvfr0.fsf@jedbrown.org> Message-ID: <87edmgvccn.fsf@jedbrown.org> And here's an MR to do what you want without any code/arg changes. https://gitlab.com/petsc/petsc/-/merge_requests/6588 Jed Brown writes: > Duan Junming writes: > >> Dear Jed, >> >> >> Thank you for the suggestion. >> >> When I run tests/ex33.c with >> >> ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 3 -dm_refine 1 -dm_view cgns:test.cgns >> >> and load it using Paraview, >> >> the mesh is still with straight lines. > > Ah, the viewer is keyed on the field (since the CGNS as supported by Paraview specifies coordinates and fields in the same space). That doesn't exist in your case. 
If you apply this patch and add `-petscspace_degre 3` to your command, you'll see that high order information is present. Paraview doesn't render as curves in all views, but it has the data. > > diff --git i/src/dm/impls/plex/tests/ex33.c w/src/dm/impls/plex/tests/ex33.c > index 803095bc082..590facfa4f4 100644 > --- i/src/dm/impls/plex/tests/ex33.c > +++ w/src/dm/impls/plex/tests/ex33.c > @@ -198,7 +198,6 @@ PetscErrorCode CreateMesh(MPI_Comm comm, AppCtx *ctx, DM *dm) > default: > SETERRQ(comm, PETSC_ERR_ARG_OUTOFRANGE, "Unknown mesh transform %d", ctx->meshTransform); > } > - PetscCall(DMViewFromOptions(*dm, NULL, "-dm_view")); > PetscFunctionReturn(PETSC_SUCCESS); > } > > @@ -227,6 +226,7 @@ static PetscErrorCode CreateDiscretization(DM dm, AppCtx *ctx) > PetscCall(DMCreateDS(dm)); > PetscCall(DMGetDS(dm, &ds)); > PetscCall(PetscDSSetObjective(ds, 0, volume)); > + PetscCall(DMViewFromOptions(dm, NULL, "-dm_view")); > PetscFunctionReturn(PETSC_SUCCESS); > } > > > I can update the viewer to handle the degenerate case of no field (all my models have fields). From junming.duan at epfl.ch Mon Jun 12 13:26:05 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Mon, 12 Jun 2023 18:26:05 +0000 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: <87edmgvccn.fsf@jedbrown.org> References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> <80d4226154f940ef814ba34fe4815970@epfl.ch> <87mt14vo4f.fsf@jedbrown.org> <87h6rcvfr0.fsf@jedbrown.org>,<87edmgvccn.fsf@jedbrown.org> Message-ID: <36a02b79201643128c8b699215ce3b67@epfl.ch> Dear Jed, Thank you for your help! Now I moved the line using "DMViewFromOptions" after the function "PetscDSSetObjective", and it works for "-dm_coord_petscspace_degree 3 -petscspace_degree 3". But when I tried degree 4: ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 4 -petscspace_degree 4 -dm_refine 1 -dm_view cgns:test.cgns Paraview gives an empty render. Using degree 5: ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 5 -petscspace_degree 5 -dm_refine 1 -dm_view cgns:test.cgns it reports: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: Cell type quadrilateral with closure size 36 [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.19.2, unknown [0]PETSC ERROR: ./ex33 on a arch-darwin-c-debug named JunmingMacBook-Pro.local by Junming Mon Jun 12 20:23:04 2023 [0]PETSC ERROR: Configure options --download-cgns --download-hdf5 --download-openmpi --download-triangle --with-fc=0 PETSC_ARCH=arch-darwin-c-debug --download-cgns [0]PETSC ERROR: #1 DMPlexCGNSGetPermutation_Internal() at /Users/Junming/Packages/petsc/src/dm/impls/plex/cgns/plexcgns2.c:533 [0]PETSC ERROR: #2 DMView_PlexCGNS() at /Users/Junming/Packages/petsc/src/dm/impls/plex/cgns/plexcgns2.c:769 [0]PETSC ERROR: #3 DMView_Plex() at /Users/Junming/Packages/petsc/src/dm/impls/plex/plex.c:1801 [0]PETSC ERROR: #4 DMView() at /Users/Junming/Packages/petsc/src/dm/interface/dm.c:996 [0]PETSC ERROR: #5 PetscObjectView() at /Users/Junming/Packages/petsc/src/sys/objects/destroy.c:78 [0]PETSC ERROR: #6 PetscObjectViewFromOptions() at /Users/Junming/Packages/petsc/src/sys/objects/destroy.c:128 [0]PETSC ERROR: #7 DMViewFromOptions() at /Users/Junming/Packages/petsc/src/dm/interface/dm.c:940 [0]PETSC ERROR: #8 CreateDiscretization() at ex33.c:232 [0]PETSC ERROR: #9 main() at ex33.c:263 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -dm_coord_petscspace_degree 5 (source: command line) [0]PETSC ERROR: -dm_coord_space 0 (source: command line) [0]PETSC ERROR: -dm_plex_box_faces 1,1 (source: command line) [0]PETSC ERROR: -dm_plex_simplex 0 (source: command line) [0]PETSC ERROR: -dm_refine 1 (source: command line) [0]PETSC ERROR: -dm_view cgns:test.cgns (source: command line) [0]PETSC ERROR: -mesh_transform annulus (source: command line) [0]PETSC ERROR: -petscspace_degree 5 (source: command line) [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF with errorcode 56. Does cgns work for degree >= 4? Junming ________________________________ From: Jed Brown Sent: Monday, June 12, 2023 19:07 To: Duan Junming; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] dm_view of high-order geometry/solution And here's an MR to do what you want without any code/arg changes. https://gitlab.com/petsc/petsc/-/merge_requests/6588 Jed Brown writes: > Duan Junming writes: > >> Dear Jed, >> >> >> Thank you for the suggestion. >> >> When I run tests/ex33.c with >> >> ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 3 -dm_refine 1 -dm_view cgns:test.cgns >> >> and load it using Paraview, >> >> the mesh is still with straight lines. > > Ah, the viewer is keyed on the field (since the CGNS as supported by Paraview specifies coordinates and fields in the same space). That doesn't exist in your case. If you apply this patch and add `-petscspace_degre 3` to your command, you'll see that high order information is present. Paraview doesn't render as curves in all views, but it has the data. 
> > diff --git i/src/dm/impls/plex/tests/ex33.c w/src/dm/impls/plex/tests/ex33.c > index 803095bc082..590facfa4f4 100644 > --- i/src/dm/impls/plex/tests/ex33.c > +++ w/src/dm/impls/plex/tests/ex33.c > @@ -198,7 +198,6 @@ PetscErrorCode CreateMesh(MPI_Comm comm, AppCtx *ctx, DM *dm) > default: > SETERRQ(comm, PETSC_ERR_ARG_OUTOFRANGE, "Unknown mesh transform %d", ctx->meshTransform); > } > - PetscCall(DMViewFromOptions(*dm, NULL, "-dm_view")); > PetscFunctionReturn(PETSC_SUCCESS); > } > > @@ -227,6 +226,7 @@ static PetscErrorCode CreateDiscretization(DM dm, AppCtx *ctx) > PetscCall(DMCreateDS(dm)); > PetscCall(DMGetDS(dm, &ds)); > PetscCall(PetscDSSetObjective(ds, 0, volume)); > + PetscCall(DMViewFromOptions(dm, NULL, "-dm_view")); > PetscFunctionReturn(PETSC_SUCCESS); > } > > > I can update the viewer to handle the degenerate case of no field (all my models have fields). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Jun 12 13:54:21 2023 From: jed at jedbrown.org (Jed Brown) Date: Mon, 12 Jun 2023 12:54:21 -0600 Subject: [petsc-users] dm_view of high-order geometry/solution In-Reply-To: <36a02b79201643128c8b699215ce3b67@epfl.ch> References: <9e7bbf98cd774290ab470d723cc7ef2b@epfl.ch> <80d4226154f940ef814ba34fe4815970@epfl.ch> <87mt14vo4f.fsf@jedbrown.org> <87h6rcvfr0.fsf@jedbrown.org> <87edmgvccn.fsf@jedbrown.org> <36a02b79201643128c8b699215ce3b67@epfl.ch> Message-ID: <87bkhkv7du.fsf@jedbrown.org> CGNS supports fourth order and it's coded in PETSc, but Paraview hasn't implemented reading it yet. I think it would not be much work for someone (maybe you) to add it to Paraview. I have lots of applications on cubics, but not much beyond that so it hasn't risen to top priority for me. There is an accepted extension, but the CGNS implementation is still in a branch. https://cgns.github.io/ProposedExtensions/CPEX0045_HighOrder_v2.pdf https://github.com/CGNS/CGNS/tree/CPEX45_high_order There have been recent (announced in a blog post, yet still undocumented) extensions to the VTU format that would support this, but the format is so bad for parallel IO and time series that I haven't been motivated to extend the PETSc writer. Of course we would welcome contributions. Duan Junming writes: > Dear Jed, > > > Thank you for your help! > > Now I moved the line using "DMViewFromOptions" after the function "PetscDSSetObjective", > > and it works for "-dm_coord_petscspace_degree 3 -petscspace_degree 3". > > > But when I tried degree 4: > > ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 4 -petscspace_degree 4 -dm_refine 1 -dm_view cgns:test.cgns > > Paraview gives an empty render. > > > Using degree 5: > > ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 5 -petscspace_degree 5 -dm_refine 1 -dm_view cgns:test.cgns > > it reports: > > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: Cell type quadrilateral with closure size 36 > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.19.2, unknown > [0]PETSC ERROR: ./ex33 on a arch-darwin-c-debug named JunmingMacBook-Pro.local by Junming Mon Jun 12 20:23:04 2023 > [0]PETSC ERROR: Configure options --download-cgns --download-hdf5 --download-openmpi --download-triangle --with-fc=0 PETSC_ARCH=arch-darwin-c-debug --download-cgns > [0]PETSC ERROR: #1 DMPlexCGNSGetPermutation_Internal() at /Users/Junming/Packages/petsc/src/dm/impls/plex/cgns/plexcgns2.c:533 > [0]PETSC ERROR: #2 DMView_PlexCGNS() at /Users/Junming/Packages/petsc/src/dm/impls/plex/cgns/plexcgns2.c:769 > [0]PETSC ERROR: #3 DMView_Plex() at /Users/Junming/Packages/petsc/src/dm/impls/plex/plex.c:1801 > [0]PETSC ERROR: #4 DMView() at /Users/Junming/Packages/petsc/src/dm/interface/dm.c:996 > [0]PETSC ERROR: #5 PetscObjectView() at /Users/Junming/Packages/petsc/src/sys/objects/destroy.c:78 > [0]PETSC ERROR: #6 PetscObjectViewFromOptions() at /Users/Junming/Packages/petsc/src/sys/objects/destroy.c:128 > [0]PETSC ERROR: #7 DMViewFromOptions() at /Users/Junming/Packages/petsc/src/dm/interface/dm.c:940 > [0]PETSC ERROR: #8 CreateDiscretization() at ex33.c:232 > [0]PETSC ERROR: #9 main() at ex33.c:263 > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -dm_coord_petscspace_degree 5 (source: command line) > [0]PETSC ERROR: -dm_coord_space 0 (source: command line) > [0]PETSC ERROR: -dm_plex_box_faces 1,1 (source: command line) > [0]PETSC ERROR: -dm_plex_simplex 0 (source: command line) > [0]PETSC ERROR: -dm_refine 1 (source: command line) > [0]PETSC ERROR: -dm_view cgns:test.cgns (source: command line) > [0]PETSC ERROR: -mesh_transform annulus (source: command line) > [0]PETSC ERROR: -petscspace_degree 5 (source: command line) > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF > with errorcode 56. > > > Does cgns work for degree >= 4? > > > Junming > > > ________________________________ > From: Jed Brown > Sent: Monday, June 12, 2023 19:07 > To: Duan Junming; Matthew Knepley > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] dm_view of high-order geometry/solution > > And here's an MR to do what you want without any code/arg changes. > > https://gitlab.com/petsc/petsc/-/merge_requests/6588 > > Jed Brown writes: > >> Duan Junming writes: >> >>> Dear Jed, >>> >>> >>> Thank you for the suggestion. >>> >>> When I run tests/ex33.c with >>> >>> ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus -dm_coord_space 0 -dm_coord_petscspace_degree 3 -dm_refine 1 -dm_view cgns:test.cgns >>> >>> and load it using Paraview, >>> >>> the mesh is still with straight lines. >> >> Ah, the viewer is keyed on the field (since the CGNS as supported by Paraview specifies coordinates and fields in the same space). That doesn't exist in your case. If you apply this patch and add `-petscspace_degre 3` to your command, you'll see that high order information is present. Paraview doesn't render as curves in all views, but it has the data. 
>> >> diff --git i/src/dm/impls/plex/tests/ex33.c w/src/dm/impls/plex/tests/ex33.c >> index 803095bc082..590facfa4f4 100644 >> --- i/src/dm/impls/plex/tests/ex33.c >> +++ w/src/dm/impls/plex/tests/ex33.c >> @@ -198,7 +198,6 @@ PetscErrorCode CreateMesh(MPI_Comm comm, AppCtx *ctx, DM *dm) >> default: >> SETERRQ(comm, PETSC_ERR_ARG_OUTOFRANGE, "Unknown mesh transform %d", ctx->meshTransform); >> } >> - PetscCall(DMViewFromOptions(*dm, NULL, "-dm_view")); >> PetscFunctionReturn(PETSC_SUCCESS); >> } >> >> @@ -227,6 +226,7 @@ static PetscErrorCode CreateDiscretization(DM dm, AppCtx *ctx) >> PetscCall(DMCreateDS(dm)); >> PetscCall(DMGetDS(dm, &ds)); >> PetscCall(PetscDSSetObjective(ds, 0, volume)); >> + PetscCall(DMViewFromOptions(dm, NULL, "-dm_view")); >> PetscFunctionReturn(PETSC_SUCCESS); >> } >> >> >> I can update the viewer to handle the degenerate case of no field (all my models have fields). From jeremy.theler-ext at ansys.com Tue Jun 13 09:16:16 2023 From: jeremy.theler-ext at ansys.com (Jeremy Theler (External)) Date: Tue, 13 Jun 2023 14:16:16 +0000 Subject: [petsc-users] Start logging with -info after PetscInitialize() Message-ID: Hello all. I've asked this question to Satish personally last week at the conference, but I'm stuck so any help would be appreciated. For some reason not worth explaining, I need to activate -info after PetscInitialize() has been already called. I'm trying something like this: PetscOptionsSetValue(NULL, "-info", NULL); PetscInfoSetFromOptions(NULL); The second call fails with [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: PetscInfoSetClasses() cannot be called after PetscInfoGetClass() or PetscInfoProcessClass() [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.19.2, Jun 01, 2023 [0]PETSC ERROR: reflexCLI on a double-int32-release named LIN54Z7SQ3 by jtheler Tue Jun 13 11:08:29 2023 [0]PETSC ERROR: Configure options --download-eigen --download-hdf5 --download-hypre --download-metis --download-mumps --download-parmetis --download-pragmatic --download-scalapack --download-slepc --with-64-bit-indices=no --with-debugging=no --with-precision=double --with-scalar-type=real COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 --download-egads --download-opencascade --download-tetgen [0]PETSC ERROR: #1 PetscInfoSetClasses() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:182 [0]PETSC ERROR: #2 PetscInfoSetFromOptions() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:407 But if I ignore the non-zero return value and I allow my program to continue, the required logging is enabled. I also tried using a local PetscOptions object but the result is the same. Any ideas to avoid that wrong state error? Thanks -- jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob.fai at gmail.com Tue Jun 13 09:25:26 2023 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Tue, 13 Jun 2023 10:25:26 -0400 Subject: [petsc-users] Start logging with -info after PetscInitialize() In-Reply-To: References: Message-ID: <412C12C5-5CBB-47EC-AC7C-77B9D8483BA5@gmail.com> Call PetscInfoDestroy() first. https://petsc.org/main/manualpages/Profiling/PetscInfoDestroy/ Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) > On Jun 13, 2023, at 10:16, Jeremy Theler (External) via petsc-users wrote: > > Hello all. 
> > I've asked this question to Satish personally last week at the conference, but I'm stuck so any help would be appreciated. > For some reason not worth explaining, I need to activate -info after PetscInitialize() has been already called. > I'm trying something like this: > > PetscOptionsSetValue(NULL, "-info", NULL); > PetscInfoSetFromOptions(NULL); > > The second call fails with > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: PetscInfoSetClasses() cannot be called after PetscInfoGetClass() or PetscInfoProcessClass() > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.19.2, Jun 01, 2023 > [0]PETSC ERROR: reflexCLI on a double-int32-release named LIN54Z7SQ3 by jtheler Tue Jun 13 11:08:29 2023 > [0]PETSC ERROR: Configure options --download-eigen --download-hdf5 --download-hypre --download-metis --download-mumps --download-parmetis --download-pragmatic --download-scalapack --download-slepc --with-64-bit-indices=no --with-debugging=no --with-precision=double --with-scalar-type=real COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 --download-egads --download-opencascade --download-tetgen > [0]PETSC ERROR: #1 PetscInfoSetClasses() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:182 > [0]PETSC ERROR: #2 PetscInfoSetFromOptions() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:407 > > But if I ignore the non-zero return value and I allow my program to continue, the required logging is enabled. > I also tried using a local PetscOptions object but the result is the same. > > Any ideas to avoid that wrong state error? > > Thanks > -- > jeremy From bsmith at petsc.dev Tue Jun 13 09:38:35 2023 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 13 Jun 2023 10:38:35 -0400 Subject: [petsc-users] Start logging with -info after PetscInitialize() In-Reply-To: <412C12C5-5CBB-47EC-AC7C-77B9D8483BA5@gmail.com> References: <412C12C5-5CBB-47EC-AC7C-77B9D8483BA5@gmail.com> Message-ID: <84BE7B9F-27E6-4DFC-BECC-80A5CD1B783A@petsc.dev> Jacob, Perhaps the cleanup can be done automatically so as to not require the user to know they need to call the Destroy first? The current API seems odd. > On Jun 13, 2023, at 10:25 AM, Jacob Faibussowitsch wrote: > > Call PetscInfoDestroy() first. > > https://petsc.org/main/manualpages/Profiling/PetscInfoDestroy/ > > Best regards, > > Jacob Faibussowitsch > (Jacob Fai - booss - oh - vitch) > >> On Jun 13, 2023, at 10:16, Jeremy Theler (External) via petsc-users wrote: >> >> Hello all. >> >> I've asked this question to Satish personally last week at the conference, but I'm stuck so any help would be appreciated. >> For some reason not worth explaining, I need to activate -info after PetscInitialize() has been already called. >> I'm trying something like this: >> >> PetscOptionsSetValue(NULL, "-info", NULL); >> PetscInfoSetFromOptions(NULL); >> >> The second call fails with >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Object is in wrong state >> [0]PETSC ERROR: PetscInfoSetClasses() cannot be called after PetscInfoGetClass() or PetscInfoProcessClass() >> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
>> [0]PETSC ERROR: Petsc Release Version 3.19.2, Jun 01, 2023 >> [0]PETSC ERROR: reflexCLI on a double-int32-release named LIN54Z7SQ3 by jtheler Tue Jun 13 11:08:29 2023 >> [0]PETSC ERROR: Configure options --download-eigen --download-hdf5 --download-hypre --download-metis --download-mumps --download-parmetis --download-pragmatic --download-scalapack --download-slepc --with-64-bit-indices=no --with-debugging=no --with-precision=double --with-scalar-type=real COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 --download-egads --download-opencascade --download-tetgen >> [0]PETSC ERROR: #1 PetscInfoSetClasses() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:182 >> [0]PETSC ERROR: #2 PetscInfoSetFromOptions() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:407 >> >> But if I ignore the non-zero return value and I allow my program to continue, the required logging is enabled. >> I also tried using a local PetscOptions object but the result is the same. >> >> Any ideas to avoid that wrong state error? >> >> Thanks >> -- >> jeremy > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob.fai at gmail.com Tue Jun 13 09:55:24 2023 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Tue, 13 Jun 2023 10:55:24 -0400 Subject: [petsc-users] Start logging with -info after PetscInitialize() In-Reply-To: <84BE7B9F-27E6-4DFC-BECC-80A5CD1B783A@petsc.dev> References: <412C12C5-5CBB-47EC-AC7C-77B9D8483BA5@gmail.com> <84BE7B9F-27E6-4DFC-BECC-80A5CD1B783A@petsc.dev> Message-ID: > Perhaps the cleanup can be done automatically so as to not require the user to know they need to call the Destroy first? It is a question of guessing whether the user intended to do this. On the one hand, the PetscInfo() API is so esoteric that anybody manually calling it must surely know what they are doing. But on the other hand, it is useful to catch this. I suppose I can add a note in the error message to call PetscInfoDestroy() if it is truly intended. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) > On Jun 13, 2023, at 10:38, Barry Smith wrote: > > > Jacob, > > Perhaps the cleanup can be done automatically so as to not require the user to know they need to call the Destroy first? The current API seems odd. > > > >> On Jun 13, 2023, at 10:25 AM, Jacob Faibussowitsch wrote: >> >> Call PetscInfoDestroy() first. >> >> https://petsc.org/main/manualpages/Profiling/PetscInfoDestroy/ >> >> Best regards, >> >> Jacob Faibussowitsch >> (Jacob Fai - booss - oh - vitch) >> >>> On Jun 13, 2023, at 10:16, Jeremy Theler (External) via petsc-users wrote: >>> >>> Hello all. >>> >>> I've asked this question to Satish personally last week at the conference, but I'm stuck so any help would be appreciated. >>> For some reason not worth explaining, I need to activate -info after PetscInitialize() has been already called. >>> I'm trying something like this: >>> >>> PetscOptionsSetValue(NULL, "-info", NULL); >>> PetscInfoSetFromOptions(NULL); >>> >>> The second call fails with >>> >>> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>> [0]PETSC ERROR: Object is in wrong state >>> [0]PETSC ERROR: PetscInfoSetClasses() cannot be called after PetscInfoGetClass() or PetscInfoProcessClass() >>> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
>>> [0]PETSC ERROR: Petsc Release Version 3.19.2, Jun 01, 2023 >>> [0]PETSC ERROR: reflexCLI on a double-int32-release named LIN54Z7SQ3 by jtheler Tue Jun 13 11:08:29 2023 >>> [0]PETSC ERROR: Configure options --download-eigen --download-hdf5 --download-hypre --download-metis --download-mumps --download-parmetis --download-pragmatic --download-scalapack --download-slepc --with-64-bit-indices=no --with-debugging=no --with-precision=double --with-scalar-type=real COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 --download-egads --download-opencascade --download-tetgen >>> [0]PETSC ERROR: #1 PetscInfoSetClasses() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:182 >>> [0]PETSC ERROR: #2 PetscInfoSetFromOptions() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:407 >>> >>> But if I ignore the non-zero return value and I allow my program to continue, the required logging is enabled. >>> I also tried using a local PetscOptions object but the result is the same. >>> >>> Any ideas to avoid that wrong state error? >>> >>> Thanks >>> -- >>> jeremy >> >> > From jeremy.theler-ext at ansys.com Tue Jun 13 10:10:07 2023 From: jeremy.theler-ext at ansys.com (Jeremy Theler (External)) Date: Tue, 13 Jun 2023 15:10:07 +0000 Subject: [petsc-users] Start logging with -info after PetscInitialize() In-Reply-To: <412C12C5-5CBB-47EC-AC7C-77B9D8483BA5@gmail.com> References: <412C12C5-5CBB-47EC-AC7C-77B9D8483BA5@gmail.com> Message-ID: Thanks. That worked. -- jeremy ________________________________ From: Jacob Faibussowitsch Sent: Tuesday, June 13, 2023 11:25 AM To: Jeremy Theler (External) Cc: petsc-users Subject: Re: [petsc-users] Start logging with -info after PetscInitialize() [External Sender] Call PetscInfoDestroy() first. https://petsc.org/main/manualpages/Profiling/PetscInfoDestroy/ Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) > On Jun 13, 2023, at 10:16, Jeremy Theler (External) via petsc-users wrote: > > Hello all. > > I've asked this question to Satish personally last week at the conference, but I'm stuck so any help would be appreciated. > For some reason not worth explaining, I need to activate -info after PetscInitialize() has been already called. > I'm trying something like this: > > PetscOptionsSetValue(NULL, "-info", NULL); > PetscInfoSetFromOptions(NULL); > > The second call fails with > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: PetscInfoSetClasses() cannot be called after PetscInfoGetClass() or PetscInfoProcessClass() > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.19.2, Jun 01, 2023 > [0]PETSC ERROR: reflexCLI on a double-int32-release named LIN54Z7SQ3 by jtheler Tue Jun 13 11:08:29 2023 > [0]PETSC ERROR: Configure options --download-eigen --download-hdf5 --download-hypre --download-metis --download-mumps --download-parmetis --download-pragmatic --download-scalapack --download-slepc --with-64-bit-indices=no --with-debugging=no --with-precision=double --with-scalar-type=real COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 --download-egads --download-opencascade --download-tetgen > [0]PETSC ERROR: #1 PetscInfoSetClasses() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:182 > [0]PETSC ERROR: #2 PetscInfoSetFromOptions() at /home/jtheler/libs/petsc-3.19.2/src/sys/info/verboseinfo.c:407 > > But if I ignore the non-zero return value and I allow my program to continue, the required logging is enabled. > I also tried using a local PetscOptions object but the result is the same. > > Any ideas to avoid that wrong state error? > > Thanks > -- > jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jun 14 14:16:57 2023 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Jun 2023 13:16:57 -0600 Subject: [petsc-users] Interpolation Between DMSTAG Objects In-Reply-To: References: Message-ID: On Wed, Jun 7, 2023 at 10:46?AM Colton Bryant < coltonbryant2021 at u.northwestern.edu> wrote: > Hello, > > I am new to PETSc so apologies in advance if there is an easy answer to > this question I've overlooked. > > I have a problem in which the computational domain is divided into two > overlapping regions (overset grids). I would like to discretize each region > as a separate DMSTAG object. What I do not understand is how to go about > interpolating a vector from say DM1 onto nodes in DM2. My current (likely > inefficient) idea is to create vectors of query points on DM2, share these > vectors among all processes, perform the interpolations on DM1, and then > insert the results into the vector on DM2. > > Before I embark on manually setting up the communication here I wanted to > just ask if there is any native support for this kind of operation in PETSc > I may be missing. > > Thanks in advance for any advice! > This sounds like a good first step. We do not currently have support for this, but I would like to support this pattern. I was thinking 1) Create a DMSwarm() with your query points 2) Call DMSwarmMigrate() to put the points on the correct processors This needs some implementation work, but is not super hard. We need to preserve the map so that you can send the results back. 3) Interpolate on the DMStag 3) Use the PetscSF that migrated particles to send back the results Joe and I are working on this support. Thanks, Matt > Best, > Colton Bryant > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From coltonbryant2021 at u.northwestern.edu Wed Jun 14 15:03:50 2023 From: coltonbryant2021 at u.northwestern.edu (Colton Bryant) Date: Wed, 14 Jun 2023 15:03:50 -0500 Subject: [petsc-users] Interpolation Between DMSTAG Objects In-Reply-To: References: Message-ID: Hi Matt, Thanks for the reply! I haven't played with DMSwarm yet but that looks like it would be a very nice solution to this sort of thing. 
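A rough sketch of the swarm-based pattern outlined above, using the existing DMSwarm API; the donor DM, dimension, and query-point count are placeholders, and (as noted) using a DMStag directly as the cell DM is the part that still needs implementation work:

    DM         sw;
    PetscReal *coor;
    PetscInt   q, nq = n_query_points;   /* query points owned by this rank */

    PetscCall(DMCreate(PETSC_COMM_WORLD, &sw));
    PetscCall(DMSetType(sw, DMSWARM));
    PetscCall(DMSetDimension(sw, 2));
    PetscCall(DMSwarmSetType(sw, DMSWARM_PIC));
    PetscCall(DMSwarmSetCellDM(sw, dmDonor));      /* the grid that owns the donor data */
    PetscCall(DMSwarmFinalizeFieldRegister(sw));
    PetscCall(DMSwarmSetLocalSizes(sw, nq, 0));
    PetscCall(DMSwarmGetField(sw, DMSwarmPICField_coor, NULL, NULL, (void **)&coor));
    for (q = 0; q < nq; q++) {
      /* coor[2*q], coor[2*q+1] = location of the q-th destination node */
    }
    PetscCall(DMSwarmRestoreField(sw, DMSwarmPICField_coor, NULL, NULL, (void **)&coor));
    PetscCall(DMSwarmMigrate(sw, PETSC_TRUE));     /* each point moves to the rank that owns it on dmDonor */

After migration the interpolation can be done locally on the donor grid, and the star forest created by the migration is what would carry the values back to the requesting ranks.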
I ended up using VecScatter to transfer the donor grid values to the destination process then doing the interpolation there. Not the most elegant solution but it's definitely not the bottleneck in my code right now! Thanks again, Colton On Wed, Jun 14, 2023 at 2:17?PM Matthew Knepley wrote: > On Wed, Jun 7, 2023 at 10:46?AM Colton Bryant < > coltonbryant2021 at u.northwestern.edu> wrote: > >> Hello, >> >> I am new to PETSc so apologies in advance if there is an easy answer to >> this question I've overlooked. >> >> I have a problem in which the computational domain is divided into two >> overlapping regions (overset grids). I would like to discretize each region >> as a separate DMSTAG object. What I do not understand is how to go about >> interpolating a vector from say DM1 onto nodes in DM2. My current (likely >> inefficient) idea is to create vectors of query points on DM2, share these >> vectors among all processes, perform the interpolations on DM1, and then >> insert the results into the vector on DM2. >> >> Before I embark on manually setting up the communication here I wanted to >> just ask if there is any native support for this kind of operation in PETSc >> I may be missing. >> >> Thanks in advance for any advice! >> > > This sounds like a good first step. We do not currently have support for > this, but I would like to support this pattern. I was thinking > > 1) Create a DMSwarm() with your query points > > 2) Call DMSwarmMigrate() to put the points on the correct processors > > This needs some implementation work, but is not super hard. We need > to preserve the map so > that you can send the results back. > > 3) Interpolate on the DMStag > > 3) Use the PetscSF that migrated particles to send back the results > > Joe and I are working on this support. > > Thanks, > > Matt > > >> Best, >> Colton Bryant >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karsten.lettmann at uni-oldenburg.de Wed Jun 14 02:14:22 2023 From: karsten.lettmann at uni-oldenburg.de (Karsten Lettmann) Date: Wed, 14 Jun 2023 09:14:22 +0200 Subject: [petsc-users] Question about using MatCreateAIJ Message-ID: <4ecd8e7d-2167-460f-725b-3198854a8146@uni-oldenburg.de> Dear all, I'm quite new to PETSC. So I hope the following questions are not too stupid. 1) We have a (Fortran) code, that we want to update from an older PETSC version (petsc.2.3.3-p16) to a newer version. Inside the old code, for creating matrices A, there are function calls of the from: MatCreateMPIAIJ In the reference page for this old version it says: When calling this routine with a single process communicator, a matrix of type SEQAIJ is returned. So I assume the following behavior of this old routine: - for N_proc == 1: ?? a matrix of type SEQAIJ is returned. - for N_proc > 1: ?? a matrix of type MPIAIJ is returned 2a) So, in the new code, we want to have a similar behavior. I found that this function is not present any more in the newer PETSC versions. Instead, one might use: MatCreateAIJ(?.) ( https://petsc.org/release/manualpages/Mat/MatCreateAIJ/ ) If I understand the reference page of the function correctly, then, actually, a similar behavior should be expected: - for N_proc == 1: ?? a matrix of type SEQAIJ is returned. - for N_proc > 1: ?? 
a matrix of type MPIAIJ is returned 2b) However, on the reference page, there is the note: It is recommended that one use the MatCreate(), MatSetType() and/or MatSetFromOptions(), MatXXXXSetPreallocation() paradigm instead of this routine directly. So, if I want the behavior above, it is recommended to code it like this, isn't it: If (N_Proc == 1) ??? MatCreate(.. ,A ,...) ??? MatSetType(?,A, MATSEQAIJ,..) ??? MatSetSizes(?,A, ..) ??? MatSeqAIJSetPreallocation(,...A,...) else ??? MatCreate(.. ,A ,...) ??? MatSetType(?,A, MATMPIAIJ,..) ??? MatSetSizes(?,A, ..) ??? MatMPIAIJSetPreallocation(,...A,...) end 3) So my questions are: - Is my present understanding correct? If? yes: - Why might using MatCreateAIJ(?.) for my case not be helpful? - So, why is it recommended to use the way 2b) instead of this MatCreateAIJ(?.) ? Best, Karsten -- ICBM Section: Physical Oceanography Universitaet Oldenburg Postfach 5634 D-26046 Oldenburg Germany Tel: +49 (0)441 798 4061 email: karsten.lettmann at uni-oldenburg.de From knepley at gmail.com Thu Jun 15 09:51:19 2023 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Jun 2023 08:51:19 -0600 Subject: [petsc-users] Question about using MatCreateAIJ In-Reply-To: <4ecd8e7d-2167-460f-725b-3198854a8146@uni-oldenburg.de> References: <4ecd8e7d-2167-460f-725b-3198854a8146@uni-oldenburg.de> Message-ID: On Thu, Jun 15, 2023 at 8:32?AM Karsten Lettmann < karsten.lettmann at uni-oldenburg.de> wrote: > Dear all, > > > I'm quite new to PETSC. So I hope the following questions are not too > stupid. > > > 1) We have a (Fortran) code, that we want to update from an older PETSC > version (petsc.2.3.3-p16) to a newer version. > > Inside the old code, for creating matrices A, there are function calls > of the from: > MatCreateMPIAIJ > > In the reference page for this old version it says: > When calling this routine with a single process communicator, a matrix > of type SEQAIJ is returned. > > So I assume the following behavior of this old routine: > - for N_proc == 1: > a matrix of type SEQAIJ is returned. > > - for N_proc > 1: > a matrix of type MPIAIJ is returned > > > > 2a) So, in the new code, we want to have a similar behavior. > > I found that this function is not present any more in the newer PETSC > versions. > > Instead, one might use: MatCreateAIJ(?.) > ( https://petsc.org/release/manualpages/Mat/MatCreateAIJ/ ) > > If I understand the reference page of the function correctly, then, > actually, a similar behavior should be expected: > > - for N_proc == 1: > a matrix of type SEQAIJ is returned. > > - for N_proc > 1: > a matrix of type MPIAIJ is returned > > > 2b) However, on the reference page, there is the note: > > It is recommended that one use the MatCreate(), MatSetType() and/or > MatSetFromOptions(), MatXXXXSetPreallocation() paradigm instead of this > routine directly. > > So, if I want the behavior above, it is recommended to code it like > this, isn't it: > > If (N_Proc == 1) > > MatCreate(.. ,A ,...) > MatSetType(?,A, MATSEQAIJ,..) > MatSetSizes(?,A, ..) > MatSeqAIJSetPreallocation(,...A,...) > > else > > MatCreate(.. ,A ,...) > MatSetType(?,A, MATMPIAIJ,..) > MatSetSizes(?,A, ..) > MatMPIAIJSetPreallocation(,...A,...) > You can use MatCreate(comm, &A); MatSetType(A, MATAIJ); MatSetSizes(A, ...); MatXAIJSetPreallocation(A, ...); We recommend this because we would like to get rid of the convenience functions that wrap up exactly this code. Thanks, Matt > end > > > > 3) So my questions are: > > - Is my present understanding correct? 
> > If yes: > > - Why might using MatCreateAIJ(?.) for my case not be helpful? > > - So, why is it recommended to use the way 2b) instead of this > MatCreateAIJ(?.) ? > > > Best, Karsten > > > > > -- > ICBM > Section: Physical Oceanography > Universitaet Oldenburg > Postfach 5634 > D-26046 Oldenburg > Germany > > Tel: +49 (0)441 798 4061 > email: karsten.lettmann at uni-oldenburg.de > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Fri Jun 16 17:13:33 2023 From: liufield at gmail.com (neil liu) Date: Fri, 16 Jun 2023 18:13:33 -0400 Subject: [petsc-users] Inquiry about the c++ destructor and PetscFinalize. Message-ID: Dear Petsc developers, I am trying to use Petsc with C++. And came across one issue. Class DMManage has been defined, one default constructor and destructor has been defined there. The code has a runtime error, "double free or corruption". Finally I found that, this is due to PetscFinalize. If I called explicitly the destructor before this PetscFinalze, the error will disappear. Does that mean PetscFinalize do some work to destroy DM? Thanks, #include #include #include #include class DMManage{ PetscSF distributionSF; public: DM dm; DMManage(); ~DMManage(); }; DMManage::DMManage(){ const char filename[] = "ParallelWaveguide.msh"; DM dmDist; PetscViewer viewer; PetscViewerCreate(PETSC_COMM_WORLD, &viewer); PetscViewerSetType(viewer, PETSCVIEWERASCII); PetscViewerFileSetMode(viewer, FILE_MODE_READ); PetscViewerFileSetName(viewer, filename); DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm); PetscViewerDestroy(&viewer); PetscInt overlap = 0; DMPlexDistribute(dm, overlap, &distributionSF, &dmDist); std::cout<<&dm< From ckhroulev at alaska.edu Fri Jun 16 17:52:46 2023 From: ckhroulev at alaska.edu (Constantine Khrulev) Date: Fri, 16 Jun 2023 14:52:46 -0800 Subject: [petsc-users] Inquiry about the c++ destructor and PetscFinalize. In-Reply-To: References: Message-ID: <53047c60-78b4-3c7f-5b62-927d9c47e294@alaska.edu> In your code the destructor of DMManage is called at the end of scope, i.e. after the PetscFinalize() call. You should be able to avoid this error by putting "DMManage objDMManage" in a code block to limit its scope and ensure that it is destroyed before PETSc is finalized: int main(int argc, char** argv) { ? PetscFunctionBeginUser; ? PetscCall(PetscInitialize(&argc, &argv, NULL, help)); ? { ? ? DMManage objDMManage; ?? } // objDMManage is destroyed here ? PetscFinalize(); ? return 0; } On 6/16/23 14:13, neil liu wrote: > Dear Petsc developers, > > I am trying to use Petsc with C++. And came across?one issue. > Class DMManage has been defined, one default constructor and > destructor has been defined there. > The code has a runtime error, "double free or corruption". Finally I > found that, this is due to PetscFinalize. If I called explicitly?the > destructor before this PetscFinalze, the error will disappear. > > Does that mean PetscFinalize?do some work to destroy DM? > > Thanks, > > #include > #include > #include > #include > > class DMManage{ > ? PetscSF distributionSF; > public: > ? DM dm; > ? DMManage(); > ? ~DMManage(); > }; > > DMManage::DMManage(){ > ? const char filename[] = "ParallelWaveguide.msh"; > ? DM dmDist; > ? PetscViewer viewer; > ? 
PetscViewerCreate(PETSC_COMM_WORLD, &viewer); > ? PetscViewerSetType(viewer, PETSCVIEWERASCII); > ? PetscViewerFileSetMode(viewer, FILE_MODE_READ); > ? PetscViewerFileSetName(viewer, filename); > ? DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm); > ? PetscViewerDestroy(&viewer); > ? PetscInt overlap = 0; > ? DMPlexDistribute(dm, overlap, &distributionSF, &dmDist); > ? std::cout<<&dm< ? if (dmDist) { > ? ? DMDestroy(&dm); > ? ? dm = dmDist; > ? } > ? DMDestroy(&dmDist); > } > > DMManage::~DMManage(){ > ? DMDestroy(&dm); > } > > int main(int argc, char** argv) { > ? PetscFunctionBeginUser; > ? PetscCall(PetscInitialize(&argc, &argv, NULL, help)); > > ? DMManage objDMManage; > > ? PetscFinalize(); > ? return 0; > } -- Constantine From ysjosh.lo at gmail.com Sun Jun 18 00:18:25 2023 From: ysjosh.lo at gmail.com (YuSh Lo) Date: Sun, 18 Jun 2023 00:18:25 -0500 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> Message-ID: Hi Matthew, After setting DMSetUseNatural to true and calling DMPlexGetGlobalToNatural, I call PestcSFView right away, it gives segmentation fault. I have also tried DMGetNaturalSF, it also gives segmentation fault when calling PetscSFView. I use PETSC_VIEWER_STDOUT_WORLD as PetscViewer Thanks, Josh Matthew Knepley ? 2023?6?9? ?? ??1:04??? > On Fri, Jun 9, 2023 at 1:46?PM YuSh Lo wrote: > >> Hi Barry, >> >> Is there any way to use the mapping generated by DMPlexDistribute along >> with AO? >> > > For Plex, if you turn on > > https://petsc.org/main/manualpages/DM/DMSetUseNatural/ > > before DMPlexDistribute(), it will compute and store a GlobalToNatural > map. This can be > used to map vectors back and forth, but you can extract the SF > > DMPlexGetGlobalToNaturalSF > > > and use that to remap your IS, by extracting the indices. > > THanks, > > Matt > > >> Thanks, >> Josh >> >> >> Barry Smith ? 2023?6?9? ?? ??10:42??? >> >>> >>> You might be looking for >>> https://petsc.org/release/manualpages/AO/AO/#ao >>> >>> >>> On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: >>> >>> An IS is just an array of integers. We need your context. >>> Is this question for sparse matrices? If so look at the documentation on >>> the AIJ matrix construction and the global vertex numbering system. >>> >>> Mark >>> >>> On Thu, Jun 8, 2023 at 1:15?PM YuSh Lo wrote: >>> >>>> Hi, >>>> >>>> I have an IS that contains some vertex that is in natural numbering. >>>> How do I map them to global numbering without being distributed? >>>> >>>> Thanks, >>>> Josh >>>> >>> >>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ysjosh.lo at gmail.com Sun Jun 18 01:12:27 2023 From: ysjosh.lo at gmail.com (YuSh Lo) Date: Sun, 18 Jun 2023 01:12:27 -0500 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> Message-ID: I am getting a null PetscSF after calling DMPlexGetGlobalToNatural YuSh Lo ? 2023?6?18? ?? ??12:18??? > Hi Matthew, > > After setting DMSetUseNatural to true and calling DMPlexGetGlobalToNatural, > I call PestcSFView right away, it gives segmentation fault. > I have also tried DMGetNaturalSF, it also gives segmentation fault when > calling PetscSFView. 
> I use PETSC_VIEWER_STDOUT_WORLD as PetscViewer > > Thanks, > Josh > > > Matthew Knepley ? 2023?6?9? ?? ??1:04??? > >> On Fri, Jun 9, 2023 at 1:46?PM YuSh Lo wrote: >> >>> Hi Barry, >>> >>> Is there any way to use the mapping generated by DMPlexDistribute along >>> with AO? >>> >> >> For Plex, if you turn on >> >> https://petsc.org/main/manualpages/DM/DMSetUseNatural/ >> >> before DMPlexDistribute(), it will compute and store a GlobalToNatural >> map. This can be >> used to map vectors back and forth, but you can extract the SF >> >> DMPlexGetGlobalToNaturalSF >> >> >> and use that to remap your IS, by extracting the indices. >> >> THanks, >> >> Matt >> >> >>> Thanks, >>> Josh >>> >>> >>> Barry Smith ? 2023?6?9? ?? ??10:42??? >>> >>>> >>>> You might be looking for >>>> https://petsc.org/release/manualpages/AO/AO/#ao >>>> >>>> >>>> On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: >>>> >>>> An IS is just an array of integers. We need your context. >>>> Is this question for sparse matrices? If so look at the documentation >>>> on the AIJ matrix construction and the global vertex numbering system. >>>> >>>> Mark >>>> >>>> On Thu, Jun 8, 2023 at 1:15?PM YuSh Lo wrote: >>>> >>>>> Hi, >>>>> >>>>> I have an IS that contains some vertex that is in natural numbering. >>>>> How do I map them to global numbering without being distributed? >>>>> >>>>> Thanks, >>>>> Josh >>>>> >>>> >>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karsten.lettmann at uni-oldenburg.de Sun Jun 18 03:26:32 2023 From: karsten.lettmann at uni-oldenburg.de (Karsten Lettmann) Date: Sun, 18 Jun 2023 10:26:32 +0200 Subject: [petsc-users] Question about using MatCreateAIJ In-Reply-To: References: <4ecd8e7d-2167-460f-725b-3198854a8146@uni-oldenburg.de> Message-ID: Dear Matthew, thanks for you help. 1) I tested your suggestion to pass NULL as the arguments for the MatXAIJSetPreallocation. So the old version was: CALL MatCreateMPIAIJ(MPI_GROUP,N_local,N_local,N_global,N_global,0,DNNZ,0,ONNZ,A,IERR) And after you suggestion it is now: ? CALL MatCreate(MPI_GROUP,A,IERR) ? CALL MatSetType(A,MATAIJ,IERR) ? CALL MatSetSizes(A,N_local,N_local,N_global,N_global,IERR) ? CALL MatXAIJSetPreallocation(A,1,DNNZ,ONNZ,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,IERR) ? Setting block-size = 1. 2) Concerning the error with MatResetPreallocation: We have an iterative loop, in which the matrix A is filled very often with different non-zero structure. Further, I read in the manual pages that due to performance issues, one should preallocate enough space, as operations as matsetvalues might be time consuming due to additional on-demand allocations. So I did the following coding in principle: ??? Set matrix A the first time with preallocation ??? iteration-loop start ??? ??? MatResetPreallocation(A,...) ??????? MatZeroEntries (A) ??? ??? Fill Matrix A ??? ??? MatAssemblyXX(A_WAVE,MAT_FINAL_ASSEMBLY,IERR) ??? iteration-loop end With these settings, the code run with 2 CPU. But with 1 CPU I got an error, which was in MatResetPreallocation. I could not understand, why the above code works with 2 CPU but not with 1 CPU. At the moment, I believe the reason for this error seems to be a pre-check, that is done in SeqAIJ but not in MPIAIJ fo a valid and present matrix A. 
(Now, an image is included showing the codings of?? : https://petsc.org/release/src/mat/impls/aij/seq/aij.c.html#MatResetPreallocation_SeqAIJ https://petsc.org/release/src/mat/impls/aij/mpi/mpiaij.c.html#MatResetPreallocation_MPIAIJ ) So, it seems for me at the moment, that the first MatResetPreallocation (when the iteration loop is entered the first time) is done on an not-assembled matrix A. So for one CPU I got an error, while 2 CPUs seem to have been more tolerant. (I'm not sure, if this interpretation is correct.) So, I changed the coding in that way, that I check the assembly status before the preallocation. Using the coding: ??? CALL MatAssembled(A,A_assembled,ierr) ??? IF (A_assembled .eqv. PETSC_TRUE) then ??????? CALL MatResetPreallocation(A,ierr) ??? ENDIF then worked for 1 and 2 CPU. 3) There was another finding, which hopefully is correct. Actually, I did this MatResetPreallocation to have a better performance when filling the matrix A later, as was suggested on the manual pages. However, I found (if I did nothing wrong) that this MatResetPreallocation was much more time consuming than the additional (and unwanted) allocations done during filling the matrix. So, in the end, my code seems to be faster, when *not* doing this in the iteration loop: ??? CALL MatAssembled(A,A_assembled,ierr) ??? IF (A_assembled .eqv. PETSC_TRUE) then ??????? CALL MatResetPreallocation(A,ierr) ??? ENDIF As I told you, I'm a beginner to PETSC and I do not know, if I have done it correctly??? Best, Karsten > The arguments are a combination of the AIJ and SBAIJ arguments. You > can just pass NULL for the SBAIJ?args. > > Then I? ran into issues with Resetpreallocation, that I do not > understand. > > Can you send the error? > > ? Thanks, > > ? ? Matt > > I want this, because we have an iterative procedure, where the > matrix A_wave and its non-zero structure are changing very often. > > I try to find the reason for my problem. > > > > I really thank you for your answer, that helped me to understand > things a bit. > > > I wish you all the best, Karsten > > > > Am 15.06.23 um 16:51 schrieb Matthew Knepley: >> >> ACHTUNG!Diese E-Mail kommt von Extern! WARNING! This email >> originated off-campus. >> >> On Thu, Jun 15, 2023 at 8:32?AM Karsten Lettmann >> > > wrote: >> >> Dear all, >> >> >> I'm quite new to PETSC. So I hope the following questions are >> not too >> stupid. >> >> >> 1) We have a (Fortran) code, that we want to update from an >> older PETSC >> version (petsc.2.3.3-p16) to a newer version. >> >> Inside the old code, for creating matrices A, there are >> function calls >> of the from: >> MatCreateMPIAIJ >> >> In the reference page for this old version it says: >> When calling this routine with a single process communicator, >> a matrix >> of type SEQAIJ is returned. >> >> So I assume the following behavior of this old routine: >> - for N_proc == 1: >> ??? a matrix of type SEQAIJ is returned. >> >> - for N_proc > 1: >> ??? a matrix of type MPIAIJ is returned >> >> >> >> 2a) So, in the new code, we want to have a similar behavior. >> >> I found that this function is not present any more in the >> newer PETSC >> versions. >> >> Instead, one might use: MatCreateAIJ(?.) >> ( https://petsc.org/release/manualpages/Mat/MatCreateAIJ/ ) >> >> If I understand the reference page of the function correctly, >> then, >> actually, a similar behavior should be expected: >> >> - for N_proc == 1: >> ??? a matrix of type SEQAIJ is returned. >> >> - for N_proc > 1: >> ??? 
a matrix of type MPIAIJ is returned >> >> >> 2b) However, on the reference page, there is the note: >> >> It is recommended that one use the MatCreate(), MatSetType() >> and/or >> MatSetFromOptions(), MatXXXXSetPreallocation() paradigm >> instead of this >> routine directly. >> >> So, if I want the behavior above, it is recommended to code >> it like >> this, isn't it: >> >> If (N_Proc == 1) >> >> ???? MatCreate(.. ,A ,...) >> ???? MatSetType(?,A, MATSEQAIJ,..) >> ???? MatSetSizes(?,A, ..) >> ???? MatSeqAIJSetPreallocation(,...A,...) >> >> else >> >> ???? MatCreate(.. ,A ,...) >> ???? MatSetType(?,A, MATMPIAIJ,..) >> ???? MatSetSizes(?,A, ..) >> ???? MatMPIAIJSetPreallocation(,...A,...) >> >> >> You can use >> >> ? MatCreate(comm, &A); >> ? MatSetType(A, MATAIJ); >> ? MatSetSizes(A, ...); >> ??MatXAIJSetPreallocation(A, ...); >> >> We recommend this because we would like to get rid of the >> convenience functions that >> wrap up exactly this code. >> >> ? Thanks, >> >> ? ? ?Matt >> >> end >> >> >> >> 3) So my questions are: >> >> - Is my present understanding correct? >> >> If? yes: >> >> - Why might using MatCreateAIJ(?.) for my case not be helpful? >> >> - So, why is it recommended to use the way 2b) instead of this >> MatCreateAIJ(?.) ? >> >> >> Best, Karsten >> >> >> >> >> -- >> ICBM >> Section: Physical Oceanography >> Universitaet Oldenburg >> Postfach 5634 >> D-26046 Oldenburg >> Germany >> >> Tel:? ? +49 (0)441 798 4061 >> email: karsten.lettmann at uni-oldenburg.de >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > -- > ICBM > Section: Physical Oceanography > Universitaet Oldenburg > Postfach 5634 > D-26046 Oldenburg > Germany > > Tel: +49 (0)441 798 4061 > email:karsten.lettmann at uni-oldenburg.de > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- ICBM Section: Physical Oceanography Universitaet Oldenburg Postfach 5634 D-26046 Oldenburg Germany Tel: +49 (0)441 798 4061 email: karsten.lettmann at uni-oldenburg.de -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: fgcfkbigjgelpmgo.png Type: image/png Size: 147416 bytes Desc: not available URL: From knepley at gmail.com Sun Jun 18 07:37:27 2023 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 18 Jun 2023 08:37:27 -0400 Subject: [petsc-users] Question about using MatCreateAIJ In-Reply-To: References: <4ecd8e7d-2167-460f-725b-3198854a8146@uni-oldenburg.de> Message-ID: On Sun, Jun 18, 2023 at 4:26?AM Karsten Lettmann < karsten.lettmann at uni-oldenburg.de> wrote: > Dear Matthew, > > > thanks for you help. > > > 1) I tested your suggestion to pass NULL as the arguments for the > MatXAIJSetPreallocation. > > So the old version was: > > CALL > MatCreateMPIAIJ(MPI_GROUP,N_local,N_local,N_global,N_global,0,DNNZ,0,ONNZ,A,IERR) > > And after you suggestion it is now: > > CALL MatCreate(MPI_GROUP,A,IERR) > CALL MatSetType(A,MATAIJ,IERR) > CALL MatSetSizes(A,N_local,N_local,N_global,N_global,IERR) > CALL > MatXAIJSetPreallocation(A,1,DNNZ,ONNZ,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,IERR) > > > Setting block-size = 1. 
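For reference, the same create/preallocate sequence in C, with NULL passed for the two upper-triangular (SBAIJ-only) arguments of MatXAIJSetPreallocation(). This is only a sketch; the function name, the sizes, and the dnnz/onnz arrays are placeholders supplied by the application:

#include <petscmat.h>

static PetscErrorCode CreateAIJLikeAbove(MPI_Comm comm, PetscInt n_local, const PetscInt dnnz[], const PetscInt onnz[], Mat *A)
{
  PetscFunctionBeginUser;
  PetscCall(MatCreate(comm, A));
  PetscCall(MatSetType(*A, MATAIJ));
  PetscCall(MatSetSizes(*A, n_local, n_local, PETSC_DETERMINE, PETSC_DETERMINE));
  /* bs = 1; the last two (upper-triangular, SBAIJ-only) arguments may be NULL here */
  PetscCall(MatXAIJSetPreallocation(*A, 1, dnnz, onnz, NULL, NULL));
  PetscFunctionReturn(PETSC_SUCCESS);
}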
> > > 2) Concerning the error with MatResetPreallocation: > > We have an iterative loop, in which the matrix A is filled very often with > different non-zero structure. > Further, I read in the manual pages that due to performance issues, one > should preallocate enough space, as operations as matsetvalues might be > time consuming due to additional > on-demand allocations. > > > So I did the following coding in principle: > > > Set matrix A the first time with preallocation > > > iteration-loop start > > MatResetPreallocation(A,...) > > MatZeroEntries (A) > > Fill Matrix A > > MatAssemblyXX(A_WAVE,MAT_FINAL_ASSEMBLY,IERR) > > > iteration-loop end > > > With these settings, the code run with 2 CPU. > But with 1 CPU I got an error, which was in MatResetPreallocation. > I could not understand, why the above code works with 2 CPU but not with 1 > CPU. > > At the moment, I believe the reason for this error seems to be a > pre-check, that is done in SeqAIJ but not in MPIAIJ fo a valid and present > matrix A. > > (Now, an image is included showing the codings of : > > https://petsc.org/release/src/mat/impls/aij/seq/aij.c.html#MatResetPreallocation_SeqAIJ > > https://petsc.org/release/src/mat/impls/aij/mpi/mpiaij.c.html#MatResetPreallocation_MPIAIJ > ) > > > > > So, it seems for me at the moment, that the first MatResetPreallocation > (when the iteration loop is entered the first time) is done on an > not-assembled matrix A. > So for one CPU I got an error, while 2 CPUs seem to have been more > tolerant. > (I'm not sure, if this interpretation is correct.) > > > So, I changed the coding in that way, that I check the assembly status > before the preallocation. > > > Using the coding: > > CALL MatAssembled(A,A_assembled,ierr) > IF (A_assembled .eqv. PETSC_TRUE) then > CALL MatResetPreallocation(A,ierr) > ENDIF > > then worked for 1 and 2 CPU. > > > > 3) There was another finding, which hopefully is correct. > > > Actually, I did this MatResetPreallocation to have a better performance > when filling the matrix A later, as was suggested on the manual pages. > > However, I found (if I did nothing wrong) that this MatResetPreallocation > was much more time consuming than the additional (and unwanted) allocations > done during filling the matrix. > > So, in the end, my code seems to be faster, when *not* doing this in the > iteration loop: > > CALL MatAssembled(A,A_assembled,ierr) > IF (A_assembled .eqv. PETSC_TRUE) then > CALL MatResetPreallocation(A,ierr) > ENDIF > > > As I told you, I'm a beginner to PETSC and I do not know, if I have done > it correctly??? > I think this may be correct now. We have rewritten Mat so that inserting values is much more efficient, and can be done online, so preallocation is not really needed anymore. It is possible that this default mechanism is faster than the old preallocation. I would try the code without preallocation, using the latest release, and see how it performs. Thanks, Matt > Best, Karsten > > The arguments are a combination of the AIJ and SBAIJ arguments. You can > just pass NULL for the SBAIJ args. > >> Then I ran into issues with Resetpreallocation, that I do not understand. >> > Can you send the error? > > Thanks, > > Matt > >> I want this, because we have an iterative procedure, where the matrix >> A_wave and its non-zero structure are changing very often. >> >> I try to find the reason for my problem. >> >> >> >> I really thank you for your answer, that helped me to understand things a >> bit. 
>> >> >> I wish you all the best, Karsten >> >> >> >> Am 15.06.23 um 16:51 schrieb Matthew Knepley: >> >> ACHTUNG! Diese E-Mail kommt von Extern! WARNING! This email originated >> off-campus. >> On Thu, Jun 15, 2023 at 8:32?AM Karsten Lettmann < >> karsten.lettmann at uni-oldenburg.de> wrote: >> >>> Dear all, >>> >>> >>> I'm quite new to PETSC. So I hope the following questions are not too >>> stupid. >>> >>> >>> 1) We have a (Fortran) code, that we want to update from an older PETSC >>> version (petsc.2.3.3-p16) to a newer version. >>> >>> Inside the old code, for creating matrices A, there are function calls >>> of the from: >>> MatCreateMPIAIJ >>> >>> In the reference page for this old version it says: >>> When calling this routine with a single process communicator, a matrix >>> of type SEQAIJ is returned. >>> >>> So I assume the following behavior of this old routine: >>> - for N_proc == 1: >>> a matrix of type SEQAIJ is returned. >>> >>> - for N_proc > 1: >>> a matrix of type MPIAIJ is returned >>> >>> >>> >>> 2a) So, in the new code, we want to have a similar behavior. >>> >>> I found that this function is not present any more in the newer PETSC >>> versions. >>> >>> Instead, one might use: MatCreateAIJ(?.) >>> ( https://petsc.org/release/manualpages/Mat/MatCreateAIJ/ ) >>> >>> If I understand the reference page of the function correctly, then, >>> actually, a similar behavior should be expected: >>> >>> - for N_proc == 1: >>> a matrix of type SEQAIJ is returned. >>> >>> - for N_proc > 1: >>> a matrix of type MPIAIJ is returned >>> >>> >>> 2b) However, on the reference page, there is the note: >>> >>> It is recommended that one use the MatCreate(), MatSetType() and/or >>> MatSetFromOptions(), MatXXXXSetPreallocation() paradigm instead of this >>> routine directly. >>> >>> So, if I want the behavior above, it is recommended to code it like >>> this, isn't it: >>> >>> If (N_Proc == 1) >>> >>> MatCreate(.. ,A ,...) >>> MatSetType(?,A, MATSEQAIJ,..) >>> MatSetSizes(?,A, ..) >>> MatSeqAIJSetPreallocation(,...A,...) >>> >>> else >>> >>> MatCreate(.. ,A ,...) >>> MatSetType(?,A, MATMPIAIJ,..) >>> MatSetSizes(?,A, ..) >>> MatMPIAIJSetPreallocation(,...A,...) >>> >> >> You can use >> >> MatCreate(comm, &A); >> MatSetType(A, MATAIJ); >> MatSetSizes(A, ...); >> MatXAIJSetPreallocation(A, ...); >> >> We recommend this because we would like to get rid of the convenience >> functions that >> wrap up exactly this code. >> >> Thanks, >> >> Matt >> >> >>> end >>> >>> >>> >>> 3) So my questions are: >>> >>> - Is my present understanding correct? >>> >>> If yes: >>> >>> - Why might using MatCreateAIJ(?.) for my case not be helpful? >>> >>> - So, why is it recommended to use the way 2b) instead of this >>> MatCreateAIJ(?.) ? >>> >>> >>> Best, Karsten >>> >>> >>> >>> >>> -- >>> ICBM >>> Section: Physical Oceanography >>> Universitaet Oldenburg >>> Postfach 5634 >>> D-26046 Oldenburg >>> Germany >>> >>> Tel: +49 (0)441 798 4061 >>> email: karsten.lettmann at uni-oldenburg.de >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> -- >> ICBM >> Section: Physical Oceanography >> Universitaet Oldenburg >> Postfach 5634 >> D-26046 Oldenburg >> Germany >> >> Tel: +49 (0)441 798 4061 >> email: karsten.lettmann at uni-oldenburg.de >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- > ICBM > Section: Physical Oceanography > Universitaet Oldenburg > Postfach 5634 > D-26046 Oldenburg > Germany > > Tel: +49 (0)441 798 4061 > email: karsten.lettmann at uni-oldenburg.de > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: fgcfkbigjgelpmgo.png Type: image/png Size: 147416 bytes Desc: not available URL: From knepley at gmail.com Sun Jun 18 07:42:23 2023 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 18 Jun 2023 08:42:23 -0400 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> Message-ID: On Sun, Jun 18, 2023 at 2:12?AM YuSh Lo wrote: > I am getting a null PetscSF after calling DMPlexGetGlobalToNatural > Did you call DMDIstribute()? This is where the map is created because until then, the map is identity. Thanks, Matt > YuSh Lo ? 2023?6?18? ?? ??12:18??? > >> Hi Matthew, >> >> After setting DMSetUseNatural to true and calling >> DMPlexGetGlobalToNatural, >> I call PestcSFView right away, it gives segmentation fault. >> I have also tried DMGetNaturalSF, it also gives segmentation fault when >> calling PetscSFView. >> I use PETSC_VIEWER_STDOUT_WORLD as PetscViewer >> >> Thanks, >> Josh >> >> >> Matthew Knepley ? 2023?6?9? ?? ??1:04??? >> >>> On Fri, Jun 9, 2023 at 1:46?PM YuSh Lo wrote: >>> >>>> Hi Barry, >>>> >>>> Is there any way to use the mapping generated by DMPlexDistribute along >>>> with AO? >>>> >>> >>> For Plex, if you turn on >>> >>> https://petsc.org/main/manualpages/DM/DMSetUseNatural/ >>> >>> before DMPlexDistribute(), it will compute and store a GlobalToNatural >>> map. This can be >>> used to map vectors back and forth, but you can extract the SF >>> >>> DMPlexGetGlobalToNaturalSF >>> >>> >>> and use that to remap your IS, by extracting the indices. >>> >>> THanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Josh >>>> >>>> >>>> Barry Smith ? 2023?6?9? ?? ??10:42??? >>>> >>>>> >>>>> You might be looking for >>>>> https://petsc.org/release/manualpages/AO/AO/#ao >>>>> >>>>> >>>>> On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: >>>>> >>>>> An IS is just an array of integers. We need your context. >>>>> Is this question for sparse matrices? If so look at the documentation >>>>> on the AIJ matrix construction and the global vertex numbering system. >>>>> >>>>> Mark >>>>> >>>>> On Thu, Jun 8, 2023 at 1:15?PM YuSh Lo wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I have an IS that contains some vertex that is in natural numbering. >>>>>> How do I map them to global numbering without being distributed? 
>>>>>> >>>>>> Thanks, >>>>>> Josh >>>>>> >>>>> >>>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Sun Jun 18 08:01:21 2023 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sun, 18 Jun 2023 08:01:21 -0500 Subject: [petsc-users] Question about using MatCreateAIJ In-Reply-To: References: <4ecd8e7d-2167-460f-725b-3198854a8146@uni-oldenburg.de> Message-ID: yeah, see a C example at https://gitlab.com/petsc/petsc/-/blob/main/src/mat/tests/ex259.c I guess you can code in this outline with petsc-3.19 MatCreate MatSetSizes MatSetFromOptions iteration-loop start MatResetPreallocation(A,...) Fill Matrix A with MatSetValues MatAssemblyXX(A_WAVE,MAT_FINAL_ASSEMBLY,IERR) iteration-loop end --Junchao Zhang On Sun, Jun 18, 2023 at 7:37?AM Matthew Knepley wrote: > On Sun, Jun 18, 2023 at 4:26?AM Karsten Lettmann < > karsten.lettmann at uni-oldenburg.de> wrote: > >> Dear Matthew, >> >> >> thanks for you help. >> >> >> 1) I tested your suggestion to pass NULL as the arguments for the >> MatXAIJSetPreallocation. >> >> So the old version was: >> >> CALL >> MatCreateMPIAIJ(MPI_GROUP,N_local,N_local,N_global,N_global,0,DNNZ,0,ONNZ,A,IERR) >> >> And after you suggestion it is now: >> >> CALL MatCreate(MPI_GROUP,A,IERR) >> CALL MatSetType(A,MATAIJ,IERR) >> CALL MatSetSizes(A,N_local,N_local,N_global,N_global,IERR) >> CALL >> MatXAIJSetPreallocation(A,1,DNNZ,ONNZ,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,IERR) >> >> >> Setting block-size = 1. >> >> >> 2) Concerning the error with MatResetPreallocation: >> >> We have an iterative loop, in which the matrix A is filled very often >> with different non-zero structure. >> Further, I read in the manual pages that due to performance issues, one >> should preallocate enough space, as operations as matsetvalues might be >> time consuming due to additional >> on-demand allocations. >> >> >> So I did the following coding in principle: >> >> >> Set matrix A the first time with preallocation >> >> >> iteration-loop start >> >> MatResetPreallocation(A,...) >> >> MatZeroEntries (A) >> >> Fill Matrix A >> >> MatAssemblyXX(A_WAVE,MAT_FINAL_ASSEMBLY,IERR) >> >> >> iteration-loop end >> >> >> With these settings, the code run with 2 CPU. >> But with 1 CPU I got an error, which was in MatResetPreallocation. >> I could not understand, why the above code works with 2 CPU but not with >> 1 CPU. >> >> At the moment, I believe the reason for this error seems to be a >> pre-check, that is done in SeqAIJ but not in MPIAIJ fo a valid and present >> matrix A. >> >> (Now, an image is included showing the codings of : >> >> https://petsc.org/release/src/mat/impls/aij/seq/aij.c.html#MatResetPreallocation_SeqAIJ >> >> https://petsc.org/release/src/mat/impls/aij/mpi/mpiaij.c.html#MatResetPreallocation_MPIAIJ >> ) >> >> >> >> >> So, it seems for me at the moment, that the first MatResetPreallocation >> (when the iteration loop is entered the first time) is done on an >> not-assembled matrix A. 
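Fleshing that outline out a little, a rough C sketch with placeholder sizes and a dummy diagonal fill (not application code). The first assembly is done before the loop, so MatResetPreallocation() only ever runs on a matrix that has already been assembled once:

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  PetscInt n = 100, i, it, rstart, rend;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  /* one nonzero per row is enough for the dummy diagonal fill below */
  PetscCall(MatSeqAIJSetPreallocation(A, 1, NULL));
  PetscCall(MatMPIAIJSetPreallocation(A, 1, NULL, 0, NULL));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));

  /* first fill and assembly, before any reset */
  for (i = rstart; i < rend; i++) PetscCall(MatSetValue(A, i, i, 1.0, INSERT_VALUES));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  for (it = 0; it < 5; it++) {              /* the iteration loop */
    PetscCall(MatResetPreallocation(A));    /* make the originally allocated space reusable */
    PetscCall(MatZeroEntries(A));
    for (i = rstart; i < rend; i++) PetscCall(MatSetValue(A, i, i, (PetscScalar)(it + 1), INSERT_VALUES));
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  }

  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}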
>> So for one CPU I got an error, while 2 CPUs seem to have been more >> tolerant. >> (I'm not sure, if this interpretation is correct.) >> >> >> So, I changed the coding in that way, that I check the assembly status >> before the preallocation. >> >> >> Using the coding: >> >> CALL MatAssembled(A,A_assembled,ierr) >> IF (A_assembled .eqv. PETSC_TRUE) then >> CALL MatResetPreallocation(A,ierr) >> ENDIF >> >> then worked for 1 and 2 CPU. >> >> >> >> 3) There was another finding, which hopefully is correct. >> >> >> Actually, I did this MatResetPreallocation to have a better performance >> when filling the matrix A later, as was suggested on the manual pages. >> >> However, I found (if I did nothing wrong) that this MatResetPreallocation >> was much more time consuming than the additional (and unwanted) allocations >> done during filling the matrix. >> >> So, in the end, my code seems to be faster, when *not* doing this in the >> iteration loop: >> >> CALL MatAssembled(A,A_assembled,ierr) >> IF (A_assembled .eqv. PETSC_TRUE) then >> CALL MatResetPreallocation(A,ierr) >> ENDIF >> >> >> As I told you, I'm a beginner to PETSC and I do not know, if I have done >> it correctly??? >> > I think this may be correct now. We have rewritten Mat so that inserting > values is much more efficient, and > can be done online, so preallocation is not really needed anymore. It is > possible that this default mechanism > is faster than the old preallocation. > > I would try the code without preallocation, using the latest release, and > see how it performs. > > Thanks, > > Matt > >> Best, Karsten >> >> The arguments are a combination of the AIJ and SBAIJ arguments. You can >> just pass NULL for the SBAIJ args. >> >>> Then I ran into issues with Resetpreallocation, that I do not >>> understand. >>> >> Can you send the error? >> >> Thanks, >> >> Matt >> >>> I want this, because we have an iterative procedure, where the matrix >>> A_wave and its non-zero structure are changing very often. >>> >>> I try to find the reason for my problem. >>> >>> >>> >>> I really thank you for your answer, that helped me to understand things >>> a bit. >>> >>> >>> I wish you all the best, Karsten >>> >>> >>> >>> Am 15.06.23 um 16:51 schrieb Matthew Knepley: >>> >>> ACHTUNG! Diese E-Mail kommt von Extern! WARNING! This email originated >>> off-campus. >>> On Thu, Jun 15, 2023 at 8:32?AM Karsten Lettmann < >>> karsten.lettmann at uni-oldenburg.de> wrote: >>> >>>> Dear all, >>>> >>>> >>>> I'm quite new to PETSC. So I hope the following questions are not too >>>> stupid. >>>> >>>> >>>> 1) We have a (Fortran) code, that we want to update from an older PETSC >>>> version (petsc.2.3.3-p16) to a newer version. >>>> >>>> Inside the old code, for creating matrices A, there are function calls >>>> of the from: >>>> MatCreateMPIAIJ >>>> >>>> In the reference page for this old version it says: >>>> When calling this routine with a single process communicator, a matrix >>>> of type SEQAIJ is returned. >>>> >>>> So I assume the following behavior of this old routine: >>>> - for N_proc == 1: >>>> a matrix of type SEQAIJ is returned. >>>> >>>> - for N_proc > 1: >>>> a matrix of type MPIAIJ is returned >>>> >>>> >>>> >>>> 2a) So, in the new code, we want to have a similar behavior. >>>> >>>> I found that this function is not present any more in the newer PETSC >>>> versions. >>>> >>>> Instead, one might use: MatCreateAIJ(?.) 
>>>> ( https://petsc.org/release/manualpages/Mat/MatCreateAIJ/ ) >>>> >>>> If I understand the reference page of the function correctly, then, >>>> actually, a similar behavior should be expected: >>>> >>>> - for N_proc == 1: >>>> a matrix of type SEQAIJ is returned. >>>> >>>> - for N_proc > 1: >>>> a matrix of type MPIAIJ is returned >>>> >>>> >>>> 2b) However, on the reference page, there is the note: >>>> >>>> It is recommended that one use the MatCreate(), MatSetType() and/or >>>> MatSetFromOptions(), MatXXXXSetPreallocation() paradigm instead of this >>>> routine directly. >>>> >>>> So, if I want the behavior above, it is recommended to code it like >>>> this, isn't it: >>>> >>>> If (N_Proc == 1) >>>> >>>> MatCreate(.. ,A ,...) >>>> MatSetType(?,A, MATSEQAIJ,..) >>>> MatSetSizes(?,A, ..) >>>> MatSeqAIJSetPreallocation(,...A,...) >>>> >>>> else >>>> >>>> MatCreate(.. ,A ,...) >>>> MatSetType(?,A, MATMPIAIJ,..) >>>> MatSetSizes(?,A, ..) >>>> MatMPIAIJSetPreallocation(,...A,...) >>>> >>> >>> You can use >>> >>> MatCreate(comm, &A); >>> MatSetType(A, MATAIJ); >>> MatSetSizes(A, ...); >>> MatXAIJSetPreallocation(A, ...); >>> >>> We recommend this because we would like to get rid of the convenience >>> functions that >>> wrap up exactly this code. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> end >>>> >>>> >>>> >>>> 3) So my questions are: >>>> >>>> - Is my present understanding correct? >>>> >>>> If yes: >>>> >>>> - Why might using MatCreateAIJ(?.) for my case not be helpful? >>>> >>>> - So, why is it recommended to use the way 2b) instead of this >>>> MatCreateAIJ(?.) ? >>>> >>>> >>>> Best, Karsten >>>> >>>> >>>> >>>> >>>> -- >>>> ICBM >>>> Section: Physical Oceanography >>>> Universitaet Oldenburg >>>> Postfach 5634 >>>> D-26046 Oldenburg >>>> Germany >>>> >>>> Tel: +49 (0)441 798 4061 >>>> email: karsten.lettmann at uni-oldenburg.de >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >>> -- >>> ICBM >>> Section: Physical Oceanography >>> Universitaet Oldenburg >>> Postfach 5634 >>> D-26046 Oldenburg >>> Germany >>> >>> Tel: +49 (0)441 798 4061 >>> email: karsten.lettmann at uni-oldenburg.de >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> -- >> ICBM >> Section: Physical Oceanography >> Universitaet Oldenburg >> Postfach 5634 >> D-26046 Oldenburg >> Germany >> >> Tel: +49 (0)441 798 4061 >> email: karsten.lettmann at uni-oldenburg.de >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: fgcfkbigjgelpmgo.png Type: image/png Size: 147416 bytes Desc: not available URL: From bsmith at petsc.dev Sun Jun 18 10:22:05 2023 From: bsmith at petsc.dev (Barry Smith) Date: Sun, 18 Jun 2023 11:22:05 -0400 Subject: [petsc-users] Question about using MatCreateAIJ In-Reply-To: References: <4ecd8e7d-2167-460f-725b-3198854a8146@uni-oldenburg.de> Message-ID: <62B9901E-AC79-462C-8E19-1A36E9C78606@petsc.dev> I am concerned this is not good advice being provided. Let's back up and look more closely at your use case. * What is the ratio of new nonzero locations added compared to the initial number of nonzeros for your code, for each of your iterations? * Is it possible for many iterations, no or very few new nonzeros are being added? * Are many previous nonzero values becoming zero (or unneeded) later? Again as a ratio compared to the initial number of nonzeros? * Can you quantify the difference in time between initially filling the matrix and then refilling it using the reset preallocation and not using the reset preallocation? The effect you report that resetting the preallocation results in slower code is possible if relatively few additional nonzero locations are being created. After a matrix is assembled with a given nonzero structure (regardless of how it was filled it, using preallocation or not), setting nonzero values into new locations will be slow due to needing to do possibly many mallocs() (as much as one for each new nonzero introduced). Resetting the initially provided preallocation removes the need for all the new mallocs(), but at the expense of needing additional bookkeeping while setting the values. If you did not preallocate originally, then there is no way to prevent the additional mallocs() the second time through, so if you never preallocate but need to add many new nonzero locations adding the new nonzeros will be time consuming; hence in that situation providing the initial preallocation (taking into account all future nonzeros appearing) will pay off in possibly a big way. I will look at the bug you report for when MatResetPreallocation()is called before the first matrix assembly and see if it can be fixed. Barry > On Jun 18, 2023, at 9:01 AM, Junchao Zhang wrote: > > yeah, see a C example at https://gitlab.com/petsc/petsc/-/blob/main/src/mat/tests/ex259.c > > I guess you can code in this outline with petsc-3.19 > > MatCreate > MatSetSizes > MatSetFromOptions > iteration-loop start > > MatResetPreallocation(A,...) > > Fill Matrix A with MatSetValues > > MatAssemblyXX(A_WAVE,MAT_FINAL_ASSEMBLY,IERR) > > iteration-loop end > > > --Junchao Zhang > > > On Sun, Jun 18, 2023 at 7:37?AM Matthew Knepley > wrote: >> On Sun, Jun 18, 2023 at 4:26?AM Karsten Lettmann > wrote: >>> Dear Matthew, >>> >>> >>> >>> thanks for you help. >>> >>> >>> >>> 1) I tested your suggestion to pass NULL as the arguments for the MatXAIJSetPreallocation. >>> >>> So the old version was: >>> >>> CALL MatCreateMPIAIJ(MPI_GROUP,N_local,N_local,N_global,N_global,0,DNNZ,0,ONNZ,A,IERR) >>> >>> And after you suggestion it is now: >>> >>> CALL MatCreate(MPI_GROUP,A,IERR) >>> CALL MatSetType(A,MATAIJ,IERR) >>> CALL MatSetSizes(A,N_local,N_local,N_global,N_global,IERR) >>> CALL MatXAIJSetPreallocation(A,1,DNNZ,ONNZ,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,IERR) >>> >>> >>> >>> Setting block-size = 1. >>> >>> >>> >>> 2) Concerning the error with MatResetPreallocation: >>> >>> We have an iterative loop, in which the matrix A is filled very often with different non-zero structure. 
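One way to quantify the timing Barry asks about above is to put the fill-and-assembly phase in its own log stage and compare what -log_view reports with and without the MatResetPreallocation() call. A minimal sketch (the stage is registered once by the caller; the stage name is arbitrary):

#include <petscmat.h>

/* Register once, e.g. right after PetscInitialize():
     PetscLogStage fill_stage;
     PetscCall(PetscLogStageRegister("MatFill", &fill_stage));  */
static PetscErrorCode FillAndAssembleTimed(Mat A, PetscLogStage fill_stage)
{
  PetscFunctionBeginUser;
  PetscCall(PetscLogStagePush(fill_stage));
  /* ... the MatResetPreallocation()/MatZeroEntries()/MatSetValues() calls go here ... */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(PetscLogStagePop());
  PetscFunctionReturn(PETSC_SUCCESS);
}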
>>> Further, I read in the manual pages that due to performance issues, one should preallocate enough space, as operations as matsetvalues might be time consuming due to additional >>> on-demand allocations. >>> >>> >>> >>> So I did the following coding in principle: >>> >>> >>> >>> Set matrix A the first time with preallocation >>> >>> >>> >>> iteration-loop start >>> >>> MatResetPreallocation(A,...) >>> >>> MatZeroEntries (A) >>> >>> Fill Matrix A >>> >>> MatAssemblyXX(A_WAVE,MAT_FINAL_ASSEMBLY,IERR) >>> >>> >>> >>> iteration-loop end >>> >>> >>> >>> With these settings, the code run with 2 CPU. >>> But with 1 CPU I got an error, which was in MatResetPreallocation. >>> I could not understand, why the above code works with 2 CPU but not with 1 CPU. >>> >>> At the moment, I believe the reason for this error seems to be a pre-check, that is done in SeqAIJ but not in MPIAIJ fo a valid and present matrix A. >>> >>> (Now, an image is included showing the codings of : >>> https://petsc.org/release/src/mat/impls/aij/seq/aij.c.html#MatResetPreallocation_SeqAIJ >>> https://petsc.org/release/src/mat/impls/aij/mpi/mpiaij.c.html#MatResetPreallocation_MPIAIJ ) >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> So, it seems for me at the moment, that the first MatResetPreallocation (when the iteration loop is entered the first time) is done on an not-assembled matrix A. >>> So for one CPU I got an error, while 2 CPUs seem to have been more tolerant. >>> (I'm not sure, if this interpretation is correct.) >>> >>> >>> >>> So, I changed the coding in that way, that I check the assembly status before the preallocation. >>> >>> >>> >>> Using the coding: >>> >>> CALL MatAssembled(A,A_assembled,ierr) >>> IF (A_assembled .eqv. PETSC_TRUE) then >>> CALL MatResetPreallocation(A,ierr) >>> ENDIF >>> >>> then worked for 1 and 2 CPU. >>> >>> >>> >>> >>> >>> 3) There was another finding, which hopefully is correct. >>> >>> >>> >>> Actually, I did this MatResetPreallocation to have a better performance when filling the matrix A later, as was suggested on the manual pages. >>> >>> >>> However, I found (if I did nothing wrong) that this MatResetPreallocation was much more time consuming than the additional (and unwanted) allocations done during filling the matrix. >>> >>> >>> So, in the end, my code seems to be faster, when not doing this in the iteration loop: >>> >>> CALL MatAssembled(A,A_assembled,ierr) >>> IF (A_assembled .eqv. PETSC_TRUE) then >>> CALL MatResetPreallocation(A,ierr) >>> ENDIF >>> >>> >>> >>> As I told you, I'm a beginner to PETSC and I do not know, if I have done it correctly??? >>> >> I think this may be correct now. We have rewritten Mat so that inserting values is much more efficient, and >> can be done online, so preallocation is not really needed anymore. It is possible that this default mechanism >> is faster than the old preallocation. >> >> I would try the code without preallocation, using the latest release, and see how it performs. >> >> Thanks, >> >> Matt >>> Best, Karsten >>> >>> >>>> The arguments are a combination of the AIJ and SBAIJ arguments. You can just pass NULL for the SBAIJ args. >>>>> Then I ran into issues with Resetpreallocation, that I do not understand. >>>>> >>>> Can you send the error? >>>> >>>> Thanks, >>>> >>>> Matt >>>>> I want this, because we have an iterative procedure, where the matrix A_wave and its non-zero structure are changing very often. >>>>> >>>>> I try to find the reason for my problem. 
>>>>> >>>>> >>>>> >>>>> >>>>> >>>>> I really thank you for your answer, that helped me to understand things a bit. >>>>> >>>>> >>>>> >>>>> I wish you all the best, Karsten >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Am 15.06.23 um 16:51 schrieb Matthew Knepley: >>>>>> ACHTUNG! Diese E-Mail kommt von Extern! WARNING! This email originated off-campus. >>>>>> >>>>>> On Thu, Jun 15, 2023 at 8:32?AM Karsten Lettmann > wrote: >>>>>>> Dear all, >>>>>>> >>>>>>> >>>>>>> I'm quite new to PETSC. So I hope the following questions are not too >>>>>>> stupid. >>>>>>> >>>>>>> >>>>>>> 1) We have a (Fortran) code, that we want to update from an older PETSC >>>>>>> version (petsc.2.3.3-p16) to a newer version. >>>>>>> >>>>>>> Inside the old code, for creating matrices A, there are function calls >>>>>>> of the from: >>>>>>> MatCreateMPIAIJ >>>>>>> >>>>>>> In the reference page for this old version it says: >>>>>>> When calling this routine with a single process communicator, a matrix >>>>>>> of type SEQAIJ is returned. >>>>>>> >>>>>>> So I assume the following behavior of this old routine: >>>>>>> - for N_proc == 1: >>>>>>> a matrix of type SEQAIJ is returned. >>>>>>> >>>>>>> - for N_proc > 1: >>>>>>> a matrix of type MPIAIJ is returned >>>>>>> >>>>>>> >>>>>>> >>>>>>> 2a) So, in the new code, we want to have a similar behavior. >>>>>>> >>>>>>> I found that this function is not present any more in the newer PETSC >>>>>>> versions. >>>>>>> >>>>>>> Instead, one might use: MatCreateAIJ(?.) >>>>>>> ( https://petsc.org/release/manualpages/Mat/MatCreateAIJ/ ) >>>>>>> >>>>>>> If I understand the reference page of the function correctly, then, >>>>>>> actually, a similar behavior should be expected: >>>>>>> >>>>>>> - for N_proc == 1: >>>>>>> a matrix of type SEQAIJ is returned. >>>>>>> >>>>>>> - for N_proc > 1: >>>>>>> a matrix of type MPIAIJ is returned >>>>>>> >>>>>>> >>>>>>> 2b) However, on the reference page, there is the note: >>>>>>> >>>>>>> It is recommended that one use the MatCreate(), MatSetType() and/or >>>>>>> MatSetFromOptions(), MatXXXXSetPreallocation() paradigm instead of this >>>>>>> routine directly. >>>>>>> >>>>>>> So, if I want the behavior above, it is recommended to code it like >>>>>>> this, isn't it: >>>>>>> >>>>>>> If (N_Proc == 1) >>>>>>> >>>>>>> MatCreate(.. ,A ,...) >>>>>>> MatSetType(?,A, MATSEQAIJ,..) >>>>>>> MatSetSizes(?,A, ..) >>>>>>> MatSeqAIJSetPreallocation(,...A,...) >>>>>>> >>>>>>> else >>>>>>> >>>>>>> MatCreate(.. ,A ,...) >>>>>>> MatSetType(?,A, MATMPIAIJ,..) >>>>>>> MatSetSizes(?,A, ..) >>>>>>> MatMPIAIJSetPreallocation(,...A,...) >>>>>> >>>>>> You can use >>>>>> >>>>>> MatCreate(comm, &A); >>>>>> MatSetType(A, MATAIJ); >>>>>> MatSetSizes(A, ...); >>>>>> MatXAIJSetPreallocation(A, ...); >>>>>> >>>>>> We recommend this because we would like to get rid of the convenience functions that >>>>>> wrap up exactly this code. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>>> end >>>>>>> >>>>>>> >>>>>>> >>>>>>> 3) So my questions are: >>>>>>> >>>>>>> - Is my present understanding correct? >>>>>>> >>>>>>> If yes: >>>>>>> >>>>>>> - Why might using MatCreateAIJ(?.) for my case not be helpful? >>>>>>> >>>>>>> - So, why is it recommended to use the way 2b) instead of this >>>>>>> MatCreateAIJ(?.) ? 
>>>>>>> >>>>>>> >>>>>>> Best, Karsten >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> ICBM >>>>>>> Section: Physical Oceanography >>>>>>> Universitaet Oldenburg >>>>>>> Postfach 5634 >>>>>>> D-26046 Oldenburg >>>>>>> Germany >>>>>>> >>>>>>> Tel: +49 (0)441 798 4061 >>>>>>> email: karsten.lettmann at uni-oldenburg.de >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> -- >>>>> ICBM >>>>> Section: Physical Oceanography >>>>> Universitaet Oldenburg >>>>> Postfach 5634 >>>>> D-26046 Oldenburg >>>>> Germany >>>>> >>>>> Tel: +49 (0)441 798 4061 >>>>> email: karsten.lettmann at uni-oldenburg.de >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>> -- >>> ICBM >>> Section: Physical Oceanography >>> Universitaet Oldenburg >>> Postfach 5634 >>> D-26046 Oldenburg >>> Germany >>> >>> Tel: +49 (0)441 798 4061 >>> email: karsten.lettmann at uni-oldenburg.de >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junming.duan at epfl.ch Sun Jun 18 13:12:43 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Sun, 18 Jun 2023 18:12:43 +0000 Subject: [petsc-users] Advice on small block matrix vector multiplication Message-ID: <03c2dfc5a55e46c09290290d510eb5fc@epfl.ch> Dear all, I am using DMPlex to manage the unknowns, two fields, one for pressure, and one for velocities with two/three components, defined in each cell. They're represented by polynomials, with N (10~50) dofs for each component. I have an operator which can be written in a matrix form (N-by-N, dense), to be applied on the pressure field or each component of the velocities in each cell (the same for each cell and also for each component). I was wondering which matrix should be defined to implement the block matrix-vector multiplication, here block means the pressure or the component of the velocities. Maybe a sequential block mat? Could you recommend any example? Or I just implement this matrix-vector multiplication by hand? Thanks! Junming -------------- next part -------------- An HTML attachment was scrubbed... URL: From ysjosh.lo at gmail.com Sun Jun 18 13:21:46 2023 From: ysjosh.lo at gmail.com (YuSh Lo) Date: Sun, 18 Jun 2023 13:21:46 -0500 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> Message-ID: Hi Matthew, Yes, I have called DMPlexDistribute(). so what I do is: CALL DMSetUseNatural(serialDM, PETSC_TRUE, errorCode) CALL DMPlexDistribute(serialDM, 0, migrationSF, distributedDM, errorCode) CALL DMPlexGetGlobalToNaturalSF(serialDM, naturalSF, errorCode) CALL PetscSFView(naturalSF, PETSC_VIEWER_STDOUT_WORLD, errorCode) The naturalSF is null. Thanks, Josh Matthew Knepley ? 2023?6?18? ?? ??7:42??? > On Sun, Jun 18, 2023 at 2:12?AM YuSh Lo wrote: > >> I am getting a null PetscSF after calling DMPlexGetGlobalToNatural >> > > Did you call DMDIstribute()? 
This is where the map is created because > until then, the map is identity. > > Thanks, > > Matt > > >> YuSh Lo ? 2023?6?18? ?? ??12:18??? >> >>> Hi Matthew, >>> >>> After setting DMSetUseNatural to true and calling >>> DMPlexGetGlobalToNatural, >>> I call PestcSFView right away, it gives segmentation fault. >>> I have also tried DMGetNaturalSF, it also gives segmentation fault when >>> calling PetscSFView. >>> I use PETSC_VIEWER_STDOUT_WORLD as PetscViewer >>> >>> Thanks, >>> Josh >>> >>> >>> Matthew Knepley ? 2023?6?9? ?? ??1:04??? >>> >>>> On Fri, Jun 9, 2023 at 1:46?PM YuSh Lo wrote: >>>> >>>>> Hi Barry, >>>>> >>>>> Is there any way to use the mapping generated by DMPlexDistribute >>>>> along with AO? >>>>> >>>> >>>> For Plex, if you turn on >>>> >>>> https://petsc.org/main/manualpages/DM/DMSetUseNatural/ >>>> >>>> before DMPlexDistribute(), it will compute and store a GlobalToNatural >>>> map. This can be >>>> used to map vectors back and forth, but you can extract the SF >>>> >>>> DMPlexGetGlobalToNaturalSF >>>> >>>> >>>> and use that to remap your IS, by extracting the indices. >>>> >>>> THanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> Josh >>>>> >>>>> >>>>> Barry Smith ? 2023?6?9? ?? ??10:42??? >>>>> >>>>>> >>>>>> You might be looking for >>>>>> https://petsc.org/release/manualpages/AO/AO/#ao >>>>>> >>>>>> >>>>>> On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: >>>>>> >>>>>> An IS is just an array of integers. We need your context. >>>>>> Is this question for sparse matrices? If so look at the documentation >>>>>> on the AIJ matrix construction and the global vertex numbering system. >>>>>> >>>>>> Mark >>>>>> >>>>>> On Thu, Jun 8, 2023 at 1:15?PM YuSh Lo wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I have an IS that contains some vertex that is in natural numbering. >>>>>>> How do I map them to global numbering without being distributed? >>>>>>> >>>>>>> Thanks, >>>>>>> Josh >>>>>>> >>>>>> >>>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Jun 18 13:35:25 2023 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 18 Jun 2023 14:35:25 -0400 Subject: [petsc-users] Advice on small block matrix vector multiplication In-Reply-To: <03c2dfc5a55e46c09290290d510eb5fc@epfl.ch> References: <03c2dfc5a55e46c09290290d510eb5fc@epfl.ch> Message-ID: On Sun, Jun 18, 2023 at 2:13?PM Duan Junming via petsc-users < petsc-users at mcs.anl.gov> wrote: > Dear all, > > I am using DMPlex to manage the unknowns, two fields, one for pressure, > and one for velocities with two/three components, defined in each cell. > They're represented by polynomials, with N (10~50) dofs for each component. > I have an operator which can be written in a matrix form (N-by-N, dense), > to be applied on the pressure field or each component of the velocities in > each cell (the same for each cell and also for each component). 
> I was wondering which matrix should be defined to implement the block > matrix-vector multiplication, here block means the pressure or the > component of the velocities. Maybe a sequential block mat? Could you > recommend any example? > Or I just implement this matrix-vector multiplication by hand? > 1) It sounds like you have a collocated discretization, meaning p,u,v,w are all at the same spots. Is this true? 2) You have a dense operator, like FFT, that can act on each component 3) I think you should make a vector with blocksize d+1 and extract the components with https://petsc.org/main/manualpages/Vec/VecStrideGather/ then act on them, then restore with https://petsc.org/main/manualpages/Vec/VecStrideScatter/ You can use the *All() versions to do all the components at once. Thanks, Matt > Thanks! > Junming > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Jun 18 13:39:28 2023 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 18 Jun 2023 14:39:28 -0400 Subject: [petsc-users] IS natural numbering to global numbering In-Reply-To: References: <8AE62C1D-87E5-46E4-A948-19EC22000886@petsc.dev> Message-ID: On Sun, Jun 18, 2023 at 2:21?PM YuSh Lo wrote: > Hi Matthew, > > Yes, I have called DMPlexDistribute(). > > so what I do is: > > CALL DMSetUseNatural(serialDM, PETSC_TRUE, errorCode) > CALL DMPlexDistribute(serialDM, 0, migrationSF, distributedDM, errorCode) > CALL DMPlexGetGlobalToNaturalSF(serialDM, naturalSF, errorCode) > CALL PetscSFView(naturalSF, PETSC_VIEWER_STDOUT_WORLD, errorCode) > > The naturalSF is null. > You can see here: https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexdistribute.c#L1836 that the natural SF is created when you distribute. Are you running in serial? https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexdistribute.c#L1715 No SF is created in serial because we do not reorder unless distributing. Thanks, Matt > Thanks, > Josh > > > > Matthew Knepley ? 2023?6?18? ?? ??7:42??? > >> On Sun, Jun 18, 2023 at 2:12?AM YuSh Lo wrote: >> >>> I am getting a null PetscSF after calling DMPlexGetGlobalToNatural >>> >> >> Did you call DMDIstribute()? This is where the map is created because >> until then, the map is identity. >> >> Thanks, >> >> Matt >> >> >>> YuSh Lo ? 2023?6?18? ?? ??12:18??? >>> >>>> Hi Matthew, >>>> >>>> After setting DMSetUseNatural to true and calling >>>> DMPlexGetGlobalToNatural, >>>> I call PestcSFView right away, it gives segmentation fault. >>>> I have also tried DMGetNaturalSF, it also gives segmentation fault when >>>> calling PetscSFView. >>>> I use PETSC_VIEWER_STDOUT_WORLD as PetscViewer >>>> >>>> Thanks, >>>> Josh >>>> >>>> >>>> Matthew Knepley ? 2023?6?9? ?? ??1:04??? >>>> >>>>> On Fri, Jun 9, 2023 at 1:46?PM YuSh Lo wrote: >>>>> >>>>>> Hi Barry, >>>>>> >>>>>> Is there any way to use the mapping generated by DMPlexDistribute >>>>>> along with AO? >>>>>> >>>>> >>>>> For Plex, if you turn on >>>>> >>>>> https://petsc.org/main/manualpages/DM/DMSetUseNatural/ >>>>> >>>>> before DMPlexDistribute(), it will compute and store a GlobalToNatural >>>>> map. 
This can be >>>>> used to map vectors back and forth, but you can extract the SF >>>>> >>>>> DMPlexGetGlobalToNaturalSF >>>>> >>>>> >>>>> and use that to remap your IS, by extracting the indices. >>>>> >>>>> THanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks, >>>>>> Josh >>>>>> >>>>>> >>>>>> Barry Smith ? 2023?6?9? ?? ??10:42??? >>>>>> >>>>>>> >>>>>>> You might be looking for >>>>>>> https://petsc.org/release/manualpages/AO/AO/#ao >>>>>>> >>>>>>> >>>>>>> On Jun 9, 2023, at 11:02 AM, Mark Adams wrote: >>>>>>> >>>>>>> An IS is just an array of integers. We need your context. >>>>>>> Is this question for sparse matrices? If so look at the >>>>>>> documentation on the AIJ matrix construction and the global vertex >>>>>>> numbering system. >>>>>>> >>>>>>> Mark >>>>>>> >>>>>>> On Thu, Jun 8, 2023 at 1:15?PM YuSh Lo wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I have an IS that contains some vertex that is in natural >>>>>>>> numbering. How do I map them to global numbering without being distributed? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Josh >>>>>>>> >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junming.duan at epfl.ch Sun Jun 18 13:49:55 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Sun, 18 Jun 2023 18:49:55 +0000 Subject: [petsc-users] Advice on small block matrix vector multiplication In-Reply-To: References: <03c2dfc5a55e46c09290290d510eb5fc@epfl.ch>, Message-ID: <1a68fb2e3b954fa4bd06c5b8c6fae393@epfl.ch> From: knepley at gmail.com Sent: Sunday, June 18, 2023 20:35 To: Duan Junming Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Advice on small block matrix vector multiplication On Sun, Jun 18, 2023 at 2:13?PM Duan Junming via petsc-users > wrote: Dear all, I am using DMPlex to manage the unknowns, two fields, one for pressure, and one for velocities with two/three components, defined in each cell. They're represented by polynomials, with N (10~50) dofs for each component. I have an operator which can be written in a matrix form (N-by-N, dense), to be applied on the pressure field or each component of the velocities in each cell (the same for each cell and also for each component). I was wondering which matrix should be defined to implement the block matrix-vector multiplication, here block means the pressure or the component of the velocities. Maybe a sequential block mat? Could you recommend any example? Or I just implement this matrix-vector multiplication by hand? Dear Matt, Thank you for your quick reply! 1) It sounds like you have a collocated discretization, meaning p,u,v,w are all at the same spots. Is this true? You're right. They're collocated at the same position. 
2) You have a dense operator, like FFT, that can act on each component Right, a dense operator applied on each component. 3) I think you should make a vector with blocksize d+1 and extract the components with https://petsc.org/main/manualpages/Vec/VecStrideGather/ then act on them, then restore with https://petsc.org/main/manualpages/Vec/VecStrideScatter/ You can use the *All() versions to do all the components at once. Does this function work with the global/local vector generated from DMPlex? Now the vector is like: p_1, p_2, ..., p_N, u_1, v_1, w_1, ..., u_N, v_N, w_N. Thanks, Matt Thanks! Junming -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Jun 18 13:55:12 2023 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 18 Jun 2023 14:55:12 -0400 Subject: [petsc-users] Advice on small block matrix vector multiplication In-Reply-To: <1a68fb2e3b954fa4bd06c5b8c6fae393@epfl.ch> References: <03c2dfc5a55e46c09290290d510eb5fc@epfl.ch> <1a68fb2e3b954fa4bd06c5b8c6fae393@epfl.ch> Message-ID: On Sun, Jun 18, 2023 at 2:49?PM Duan Junming wrote: > *From:* knepley at gmail.com > *Sent:* Sunday, June 18, 2023 20:35 > *To:* Duan Junming > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Advice on small block matrix vector > multiplication > > > On Sun, Jun 18, 2023 at 2:13?PM Duan Junming via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Dear all, >> > >> I am using DMPlex to manage the unknowns, two fields, one for pressure, >> and one for velocities with two/three components, defined in each cell. >> They're represented by polynomials, with N (10~50) dofs for each component. >> > I have an operator which can be written in a matrix form (N-by-N, dense), >> to be applied on the pressure field or each component of the velocities in >> each cell (the same for each cell and also for each component). >> > I was wondering which matrix should be defined to implement the block >> matrix-vector multiplication, here block means the pressure or the >> component of the velocities. Maybe a sequential block mat? Could you >> recommend any example? >> > Or I just implement this matrix-vector multiplication by hand? >> > > Dear Matt, > > Thank you for your quick reply! > > > 1) It sounds like you have a collocated discretization, meaning p,u,v,w > are all at the same spots. Is this true? > > You're right. They're collocated at the same position. > > > 2) You have a dense operator, like FFT, that can act on each component > > Right, a dense operator applied on each component. > > > 3) I think you should make a vector with blocksize d+1 and extract the > components with > > https://petsc.org/main/manualpages/Vec/VecStrideGather/ > > then act on them, then restore with > > https://petsc.org/main/manualpages/Vec/VecStrideScatter/ > > You can use the *All() versions to do all the components at once. > > Does this function work with the global/local vector generated from DMPlex? > It should. By default, Plex orders all unknowns on a mesh point by field. > Now the vector is like: p_1, p_2, ..., p_N, u_1, v_1, w_1, ..., u_N, v_N, > w_N. > When you extract a component, it will have one field in order, as above. Thanks, Matt > Thanks, > > Matt > > > Thanks! 
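For what it's worth, a self-contained sketch of the gather / operate / scatter pattern on a plain interlaced vector is below. The block size and per-rank point count are made-up numbers, and VecScale stands in for the dense per-component operator just to keep the sketch short; whether the Plex-generated global vector is blocked in exactly this way is the layout question discussed above.

static char help[] = "Sketch of VecStrideGather / VecStrideScatter on an interlaced vector.";

#include <petscvec.h>

int main(int argc, char **argv)
{
  const PetscInt bs = 4, nloc = 10; /* 4 components per point, 10 points per rank (illustrative sizes) */
  Vec            full, comp;

  PetscCall(PetscInitialize(&argc, &argv, NULL, help));

  /* Interlaced vector: bs values per point, nloc points owned by this rank */
  PetscCall(VecCreate(PETSC_COMM_WORLD, &full));
  PetscCall(VecSetSizes(full, bs * nloc, PETSC_DETERMINE));
  PetscCall(VecSetBlockSize(full, bs));
  PetscCall(VecSetFromOptions(full));
  PetscCall(VecSet(full, 1.0));

  /* Work vector holding a single component (one value per point) */
  PetscCall(VecCreate(PETSC_COMM_WORLD, &comp));
  PetscCall(VecSetSizes(comp, nloc, PETSC_DETERMINE));
  PetscCall(VecSetFromOptions(comp));

  /* Pull out component 0, act on it, put the result back */
  PetscCall(VecStrideGather(full, 0, comp, INSERT_VALUES));
  PetscCall(VecScale(comp, 2.0)); /* stand-in for applying the dense per-component operator */
  PetscCall(VecStrideScatter(comp, 0, full, INSERT_VALUES));

  PetscCall(VecView(full, PETSC_VIEWER_STDOUT_WORLD));

  PetscCall(VecDestroy(&comp));
  PetscCall(VecDestroy(&full));
  PetscCall(PetscFinalize());
  return 0;
}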
>> Junming >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matthias.Hesselmann at avt.rwth-aachen.de Mon Jun 19 16:22:13 2023 From: Matthias.Hesselmann at avt.rwth-aachen.de (Matthias Hesselmann) Date: Mon, 19 Jun 2023 21:22:13 +0000 Subject: [petsc-users] Symbol lookup errors after change of module system on cluster Message-ID: <7ed257a749fc415e89cbed00117c5222@avt.rwth-aachen.de> Dear users, since the operating system of the cluster I am running my application on with PETSC (Version 3.19) has been changed from CentOS 7 to Rocky Linux 8 and also the module system has been changed to Lmod, I get the following error message when running my application: Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted. mpiexec noticed that process rank 0 with PID 0 on node login18-x-2 exited on signal 15 (Terminated). make: *** [Makefile:96: cathode-run] Error 143 When running the application with LD_DEBUG=files, it says that certain symbols cannot be looked up by PETSC, e.g.: /cvmfs/software.hpc.rwth.de/Linux/RH8/x86_64/intel/skylake_avx512/software/UCX/1.12.1-GCCcore-11.3.0/lib/ucx/libuct_cma.so.0: error: symbol lookup error: undefined symbol: ucs_module_global_init (fatal) [...] /rwthfs/rz/cluster/home/mh787286/petsc/arch-linux-c-debug/lib/libpetsc.so.3.19: error: symbol lookup error: undefined symbol: MPID_Abort (fatal) /rwthfs/rz/cluster/home/mh787286/petsc/arch-linux-c-debug/lib/libpetsc.so.3.19: error: symbol lookup error: undefined symbol: ps_tool_initialize (fatal) I attached the output in the "LD_DEBUG_make_cathode-run" file. When I look up the dependencies of libpetsc.so.3.19 with the ldd command, I can find the locations of the dependent libraries listed in the LD_LIBRARY_PATH (see configure.log). Thus, PETSC should be able to link to these libraries. I load the modules GCC/11.3.0 and OpenMPI/4.1.4. Furthermore, please also find the make file attached as "Makefile" as well as the configure.log and make.log. Is there anything I need to change in the make file to adapt it to the new module system, or are there any issues with missing links/ libraries after the update? Kind regards, Matthias -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1961538 bytes Desc: configure.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 21695 bytes Desc: make.log URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Makefile.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: LD_DEBUG_make_cathode-run.txt URL: From balay at mcs.anl.gov Mon Jun 19 16:29:58 2023 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 19 Jun 2023 16:29:58 -0500 (CDT) Subject: [petsc-users] Symbol lookup errors after change of module system on cluster In-Reply-To: <7ed257a749fc415e89cbed00117c5222@avt.rwth-aachen.de> References: <7ed257a749fc415e89cbed00117c5222@avt.rwth-aachen.de> Message-ID: <9f26af9a-7c20-b74f-71cd-5fe90f1ec88e@mcs.anl.gov> Are you able to run a simple MPI test code - say https://raw.githubusercontent.com/pmodels/mpich/main/examples/cpi.c with this compiler/mpi setup? Also - do you get this error with a petsc example [using the corresponding petsc makefile?] Satish On Mon, 19 Jun 2023, Matthias Hesselmann wrote: > Dear users, > > since the operating system of the cluster I am running my application on with PETSC (Version 3.19) has been changed from CentOS 7 to Rocky Linux 8 and also the module system has been changed to Lmod, I get the following error message when running my application: > > Primary job terminated normally, but 1 process returned > a non-zero exit code. Per user-direction, the job has been aborted. > mpiexec noticed that process rank 0 with PID 0 on node login18-x-2 exited on signal 15 (Terminated). > make: *** [Makefile:96: cathode-run] Error 143 > > When running the application with LD_DEBUG=files, it says that certain symbols cannot be looked up by PETSC, e.g.: > > /cvmfs/software.hpc.rwth.de/Linux/RH8/x86_64/intel/skylake_avx512/software/UCX/1.12.1-GCCcore-11.3.0/lib/ucx/libuct_cma.so.0: error: symbol lookup error: undefined symbol: ucs_module_global_init (fatal) > [...] > /rwthfs/rz/cluster/home/mh787286/petsc/arch-linux-c-debug/lib/libpetsc.so.3.19: error: symbol lookup error: undefined symbol: MPID_Abort (fatal) > /rwthfs/rz/cluster/home/mh787286/petsc/arch-linux-c-debug/lib/libpetsc.so.3.19: error: symbol lookup error: undefined symbol: ps_tool_initialize (fatal) > > I attached the output in the "LD_DEBUG_make_cathode-run" file. When I look up the dependencies of libpetsc.so.3.19 with the ldd command, I can find the locations of the dependent libraries listed in the LD_LIBRARY_PATH (see configure.log). Thus, PETSC should be able to link to these libraries. > > I load the modules GCC/11.3.0 and OpenMPI/4.1.4. Furthermore, please also find the make file attached as "Makefile" as well as the configure.log and make.log. > > Is there anything I need to change in the make file to adapt it to the new module system, or are there any issues with missing links/ libraries after the update? > > Kind regards, > Matthias > > > From diegomagela at usp.br Tue Jun 20 06:41:56 2023 From: diegomagela at usp.br (Diego Magela Lemos) Date: Tue, 20 Jun 2023 08:41:56 -0300 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? Message-ID: Considering, for instance, the following COO sparse matrix format, with repeated indices: std::vector rows{0, 0, 1, 2, 3, 4}; std::vector cols{0, 0, 1, 2, 3, 4}; std::vector values{2, -1, 2, 3, 4, 5}; that represents a 5x5 diagonal matrix A. 
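(Since the index pair (0, 0) appears twice, those two contributions are summed on assembly: 2 + (-1) = 1, so the intended matrix is diag(1, 2, 3, 4, 5).)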
So far, the code that I have is: // fill_in_matrix.cc static char help[] = "Fill in a parallel COO format sparse matrix."; #include #include int main(int argc, char **args){ Mat A; PetscInt m = 5, i, Istart, Iend; PetscCall(PetscInitialize(&argc, &args, NULL, help)); PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); PetscCall(MatSetFromOptions(A)); PetscCall(MatSetUp(A)); PetscCall(MatGetOwnershipRange(A, &Istart, &Iend)); std::vector II{0, 0, 1, 2, 3, 4}; std::vector JJ{0, 0, 1, 2, 3, 4}; std::vector XX{2, -1, 2, 3, 4, 5}; for (i = Istart; i < Iend; i++) PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES)); PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); PetscCall(MatDestroy(&A)); PetscCall(PetscFinalize()); return 0; } When running it with petscmpiexec -n 4 ./fill_in_matrix I get Mat Object: 4 MPI processes type: mpiaij row 0: (0, 1.) row 1: (1, 2.) row 2: (2, 3.) row 3: (3, 4.) row 4: Which is missing the entry of the last row. What am I missing? Even better, which would be the best way to fill in this matrix? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tisaac at anl.gov Tue Jun 20 09:38:01 2023 From: tisaac at anl.gov (Isaac, Toby) Date: Tue, 20 Jun 2023 14:38:01 +0000 Subject: [petsc-users] 2023 PETSc Annual Meeting slides and videos Message-ID: Two weeks ago we held the 2023 PETSc Annual Meeting on the campus of the Illinois Institute of Technology. The meeting was a lot of fun: we had over 30 speakers from industry, academia, and national laboratories present on their research and experiences with PETSc, including several minitutorials on recently added features of the library. You can now find slides and videos from most of these presentations on the meeting's permanent site, . Thanks to all who contributed and attended! We will hold the annual meeting May 23 & 24, 2024 in Cologne, Germany. Please let us know if you have any suggestions for this upcoming meeting. We hope to see you there! On behalf of the organizing committee, Toby Isaac From knepley at gmail.com Tue Jun 20 10:06:51 2023 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 20 Jun 2023 11:06:51 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: Message-ID: On Tue, Jun 20, 2023 at 10:55?AM Diego Magela Lemos via petsc-users < petsc-users at mcs.anl.gov> wrote: > Considering, for instance, the following COO sparse matrix format, with > repeated indices: > > std::vector rows{0, 0, 1, 2, 3, 4}; > std::vector cols{0, 0, 1, 2, 3, 4}; > std::vector values{2, -1, 2, 3, 4, 5}; > > that represents a 5x5 diagonal matrix A. 
> > So far, the code that I have is: > > // fill_in_matrix.cc > static char help[] = "Fill in a parallel COO format sparse matrix."; > #include #include > int main(int argc, char **args){ > Mat A; > PetscInt m = 5, i, Istart, Iend; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); > PetscCall(MatSetFromOptions(A)); > PetscCall(MatSetUp(A)); > PetscCall(MatGetOwnershipRange(A, &Istart, &Iend)); > > std::vector II{0, 0, 1, 2, 3, 4}; > std::vector JJ{0, 0, 1, 2, 3, 4}; > std::vector XX{2, -1, 2, 3, 4, 5}; > > for (i = Istart; i < Iend; i++) > PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES)); > > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > PetscCall(MatDestroy(&A)); > PetscCall(PetscFinalize()); > return 0; > } > > When running it with > > petscmpiexec -n 4 ./fill_in_matrix > > > I get > > > Mat Object: 4 MPI processes > > type: mpiaij > row 0: (0, 1.) > row 1: (1, 2.) > row 2: (2, 3.) > row 3: (3, 4.) > row 4: > > > Which is missing the entry of the last row. > > What am I missing? Even better, which would be the best way to fill in this matrix? > > We have a new interface for this: https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/ Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Tue Jun 20 10:18:03 2023 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Tue, 20 Jun 2023 17:18:03 +0200 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: Message-ID: The loop should iterate on the number of entries of the array, not the number of local rows On Tue, Jun 20, 2023, 17:07 Matthew Knepley wrote: > On Tue, Jun 20, 2023 at 10:55?AM Diego Magela Lemos via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Considering, for instance, the following COO sparse matrix format, with >> repeated indices: >> >> std::vector rows{0, 0, 1, 2, 3, 4}; >> std::vector cols{0, 0, 1, 2, 3, 4}; >> std::vector values{2, -1, 2, 3, 4, 5}; >> >> that represents a 5x5 diagonal matrix A. 
>> >> So far, the code that I have is: >> >> // fill_in_matrix.cc >> static char help[] = "Fill in a parallel COO format sparse matrix."; >> #include #include >> int main(int argc, char **args){ >> Mat A; >> PetscInt m = 5, i, Istart, Iend; >> >> PetscCall(PetscInitialize(&argc, &args, NULL, help)); >> >> PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); >> PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); >> PetscCall(MatSetFromOptions(A)); >> PetscCall(MatSetUp(A)); >> PetscCall(MatGetOwnershipRange(A, &Istart, &Iend)); >> >> std::vector II{0, 0, 1, 2, 3, 4}; >> std::vector JJ{0, 0, 1, 2, 3, 4}; >> std::vector XX{2, -1, 2, 3, 4, 5}; >> >> for (i = Istart; i < Iend; i++) >> PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES)); >> >> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); >> >> PetscCall(MatDestroy(&A)); >> PetscCall(PetscFinalize()); >> return 0; >> } >> >> When running it with >> >> petscmpiexec -n 4 ./fill_in_matrix >> >> >> I get >> >> >> Mat Object: 4 MPI processes >> >> type: mpiaij >> row 0: (0, 1.) >> row 1: (1, 2.) >> row 2: (2, 3.) >> row 3: (3, 4.) >> row 4: >> >> >> Which is missing the entry of the last row. >> >> What am I missing? Even better, which would be the best way to fill in this matrix? >> >> We have a new interface for this: > > https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/ > > Thanks, > > Matt > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jun 20 11:13:23 2023 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 20 Jun 2023 12:13:23 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: Message-ID: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> Since you have 6 entries that needed to be added to the matrix you will need to call MatSetValues() six time for the six entries. > On Jun 20, 2023, at 11:06 AM, Matthew Knepley wrote: > > On Tue, Jun 20, 2023 at 10:55?AM Diego Magela Lemos via petsc-users > wrote: >> Considering, for instance, the following COO sparse matrix format, with repeated indices: >> >> std::vector rows{0, 0, 1, 2, 3, 4}; >> std::vector cols{0, 0, 1, 2, 3, 4}; >> std::vector values{2, -1, 2, 3, 4, 5}; >> >> that represents a 5x5 diagonal matrix A. 
>> >> So far, the code that I have is: >> >> // fill_in_matrix.cc >> >> static char help[] = "Fill in a parallel COO format sparse matrix."; >> >> #include >> #include >> >> int main(int argc, char **args) >> { >> Mat A; >> PetscInt m = 5, i, Istart, Iend; >> >> PetscCall(PetscInitialize(&argc, &args, NULL, help)); >> >> PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); >> PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); >> PetscCall(MatSetFromOptions(A)); >> PetscCall(MatSetUp(A)); >> PetscCall(MatGetOwnershipRange(A, &Istart, &Iend)); >> >> std::vector II{0, 0, 1, 2, 3, 4}; >> std::vector JJ{0, 0, 1, 2, 3, 4}; >> std::vector XX{2, -1, 2, 3, 4, 5}; >> >> for (i = Istart; i < Iend; i++) >> PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES)); >> >> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); >> >> PetscCall(MatDestroy(&A)); >> PetscCall(PetscFinalize()); >> return 0; >> } >> When running it with >> >> petscmpiexec -n 4 ./fill_in_matrix >> >> I get >> >> Mat Object: 4 MPI processes >> type: mpiaij >> row 0: (0, 1.) >> row 1: (1, 2.) >> row 2: (2, 3.) >> row 3: (3, 4.) >> row 4: >> >> Which is missing the entry of the last row. >> >> What am I missing? Even better, which would be the best way to fill in this matrix? > We have a new interface for this: > > https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/ > > Thanks, > > Matt > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Tue Jun 20 11:36:20 2023 From: liufield at gmail.com (neil liu) Date: Tue, 20 Jun 2023 12:36:20 -0400 Subject: [petsc-users] Inquiry about the c++ destructor and PetscFinalize. In-Reply-To: <53047c60-78b4-3c7f-5b62-927d9c47e294@alaska.edu> References: <53047c60-78b4-3c7f-5b62-927d9c47e294@alaska.edu> Message-ID: Thanks a lot, Constantine. It works pretty well. On Fri, Jun 16, 2023 at 6:52?PM Constantine Khrulev wrote: > In your code the destructor of DMManage is called at the end of scope, > i.e. after the PetscFinalize() call. > > You should be able to avoid this error by putting "DMManage objDMManage" > in a code block to limit its scope and ensure that it is destroyed > before PETSc is finalized: > > int main(int argc, char** argv) { > PetscFunctionBeginUser; > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); > > { > DMManage objDMManage; > } // objDMManage is destroyed here > > PetscFinalize(); > return 0; > } > > On 6/16/23 14:13, neil liu wrote: > > Dear Petsc developers, > > > > I am trying to use Petsc with C++. And came across one issue. > > Class DMManage has been defined, one default constructor and > > destructor has been defined there. > > The code has a runtime error, "double free or corruption". Finally I > > found that, this is due to PetscFinalize. If I called explicitly the > > destructor before this PetscFinalze, the error will disappear. > > > > Does that mean PetscFinalize do some work to destroy DM? 
> > > > Thanks, > > > > #include > > #include > > #include > > #include > > > > class DMManage{ > > PetscSF distributionSF; > > public: > > DM dm; > > DMManage(); > > ~DMManage(); > > }; > > > > DMManage::DMManage(){ > > const char filename[] = "ParallelWaveguide.msh"; > > DM dmDist; > > PetscViewer viewer; > > PetscViewerCreate(PETSC_COMM_WORLD, &viewer); > > PetscViewerSetType(viewer, PETSCVIEWERASCII); > > PetscViewerFileSetMode(viewer, FILE_MODE_READ); > > PetscViewerFileSetName(viewer, filename); > > DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm); > > PetscViewerDestroy(&viewer); > > PetscInt overlap = 0; > > DMPlexDistribute(dm, overlap, &distributionSF, &dmDist); > > std::cout<<&dm< > if (dmDist) { > > DMDestroy(&dm); > > dm = dmDist; > > } > > DMDestroy(&dmDist); > > } > > > > DMManage::~DMManage(){ > > DMDestroy(&dm); > > } > > > > int main(int argc, char** argv) { > > PetscFunctionBeginUser; > > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); > > > > DMManage objDMManage; > > > > PetscFinalize(); > > return 0; > > } > > -- > Constantine > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From diegomagela at usp.br Tue Jun 20 12:42:57 2023 From: diegomagela at usp.br (Diego Magela Lemos) Date: Tue, 20 Jun 2023 14:42:57 -0300 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> Message-ID: Using all recommended approaches it worked! Thank you all in advance. Now, I'm facing problems when solving a linear system using each approach. *COO approach* Using MatSetPreallocationCOO and then MatSetValuesCOO, I'm able to fill in the matrix when running with 1 MPI process. But, if I run with more than one MPI process, the values entries are multiplied by the number of MPI processes being used. Is this behavior correct? Consider the following code: // fill_in_matrix.cc static char help[] = "Fill in a parallel COO format sparse matrix."; #include #include int main(int argc, char **args) { std::vector coo_i{0, 0, 1, 2, 3, 4}; std::vector coo_j{0, 0, 1, 2, 3, 4}; std::vector coo_v{2, -1, 2, 3, 4, 5}; Mat A; PetscInt size = 5; PetscCall(PetscInitialize(&argc, &args, NULL, help)); // Create matrix PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); PetscCall(MatSetFromOptions(A)); // Populate matrix PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data())); PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES)); // View matrix PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); PetscCall(MatDestroy(&A)); PetscCall(PetscFinalize()); return 0; } When running with petscmpiexec -n 1 ./fill_in_matrix, I got Mat Object: 1 MPI process > type: seqaij > row 0: (0, 1.) > row 1: (1, 2.) > row 2: (2, 3.) > row 3: (3, 4.) > row 4: (4, 5.) Which is a correct result. But, when running it with petscmpiexec -n 2 ./fill_in_matrix, I get Mat Object: 2 MPI process > type: mpiaij > row 0: (0, 2.) > row 1: (1, 4.) > row 2: (2, 6.) > row 3: (3, 8.) > row 4: (4, 10.) The matrix entries are multiplied by 2, that is, the number of processes used to execute the code. 
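For comparison, a minimal sketch in which every triple is handed to PETSc by exactly one rank (here simply rank 0, with all the other ranks passing an empty list) is given below. The rank guard and the spelled-out includes and template arguments are illustrative additions, not part of the code above.

static char help[] = "Sketch: COO assembly where each triple is passed by exactly one rank.";

#include <petscmat.h>
#include <vector>

int main(int argc, char **args)
{
    std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
    std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
    std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};

    Mat         A;
    PetscInt    size = 5;
    PetscMPIInt rank;

    PetscCall(PetscInitialize(&argc, &args, NULL, help));
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

    // Only rank 0 contributes the triples; every other rank passes an empty
    // list, so each (i, j, v) is submitted exactly once across the communicator.
    if (rank != 0)
    {
        coo_i.clear();
        coo_j.clear();
        coo_v.clear();
    }

    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
    PetscCall(MatSetFromOptions(A));

    PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data()));
    PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES));

    PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));

    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
}

Splitting the triples by row ownership instead of funneling everything through one rank would scale better, but the idea is the same: each entry is contributed once, and only genuinely repeated entries get summed.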
*MatSetValues approach* I obtain the same behavior when filling in the matrix by using MatSetValues static char help[] = "Fill in a parallel COO format sparse matrix."; // fill_in_matrix.cc #include #include int main(int argc, char **args) { std::vector coo_i{0, 0, 1, 2, 3, 4}; std::vector coo_j{0, 0, 1, 2, 3, 4}; std::vector coo_v{2, -1, 2, 3, 4, 5}; Mat A; PetscInt size = 5; PetscCall(PetscInitialize(&argc, &args, NULL, help)); // Create matrix PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); PetscCall(MatSetFromOptions(A)); // Populate matrix for (size_t i = 0; i < coo_v.size(); i++) PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), & coo_v.at(i), ADD_VALUES)); PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); // View matrix PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); PetscCall(MatDestroy(&A)); PetscCall(PetscFinalize()); return 0; } When solving a linear system, I get the correct answer no matter the number of MPI processes when using MatSetValues approach. The same is not true when using COO approach, whose result is only correct when using 1 MPI process. static char help[] = "Fill in a parallel COO format sparse matrix and solve a linear system."; #include #include int main(int argc, char **args) { std::vector coo_i{0, 0, 1, 2, 3, 4}; std::vector coo_j{0, 0, 1, 2, 3, 4}; std::vector coo_v{2, -1, 2, 3, 4, 5}; Mat A; Vec B, X, U; KSP ksp; PC pc; PetscReal norm; // norm of solution error PetscInt its; PetscInt size = 5; PetscCall(PetscInitialize(&argc, &args, NULL, help)); // Create matrix PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); PetscCall(MatSetFromOptions(A)); // Populate matrix // COO PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data())); PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES)); // MatSetValues for-loop // for (size_t i = 0; i < coo_v.size(); i++) // PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), & coo_v.at(i), ADD_VALUES)); // PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); // PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); // View matrix PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); // Create vector B PetscCall(VecCreate(PETSC_COMM_WORLD, &B)); PetscCall(VecSetSizes(B, PETSC_DECIDE, size)); PetscCall(VecSetFromOptions(B)); PetscCall(VecSetUp(B)); // Populate vector PetscCall(VecSetValues(B, coo_i.size(), coo_i.data(), coo_v.data(), ADD_VALUES)); PetscCall(VecAssemblyBegin(B)); PetscCall(VecAssemblyEnd(B)); // View vector PetscCall(VecView(B, PETSC_VIEWER_STDERR_WORLD)); // Define solution and auxiliary vector PetscCall(VecDuplicate(B, &X)); PetscCall(VecDuplicate(B, &U)); PetscCall(VecSet(U, 1.0)); // Create solver PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp)); PetscCall(KSPSetOperators(ksp, A, A)); PetscCall(KSPGetPC(ksp, &pc)); PetscCall(PCSetType(pc, PCJACOBI)); PetscCall(KSPSetFromOptions(ksp)); PetscCall(KSPSetTolerances(ksp, 1.e-5, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT)); // Solve PetscCall(KSPSolve(ksp, B, X)); // View solution vector PetscCall(VecView(X, PETSC_VIEWER_STDERR_WORLD)); // Verify the solution PetscCall(VecAXPY(X, -1.0, U)); PetscCall(VecNorm(X, NORM_2, &norm)); PetscCall(KSPGetIterationNumber(ksp, &its)); PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Norm of error %g, Iterations %" PetscInt_FMT "\n", (double)norm, its)); PetscCall(MatDestroy(&A)); PetscCall(VecDestroy(&B)); PetscCall(VecDestroy(&X)); 
PetscCall(VecDestroy(&U)); PetscCall(KSPDestroy(&ksp)); PetscCall(PetscFinalize()); return 0; } Why am I getting wrong results using the COO approach with more than one MPI process? Em ter., 20 de jun. de 2023 ?s 13:13, Barry Smith escreveu: > > Since you have 6 entries that needed to be added to the matrix you will > need to call MatSetValues() six time for the six entries. > > On Jun 20, 2023, at 11:06 AM, Matthew Knepley wrote: > > On Tue, Jun 20, 2023 at 10:55?AM Diego Magela Lemos via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Considering, for instance, the following COO sparse matrix format, with >> repeated indices: >> >> std::vector rows{0, 0, 1, 2, 3, 4}; >> std::vector cols{0, 0, 1, 2, 3, 4}; >> std::vector values{2, -1, 2, 3, 4, 5}; >> >> that represents a 5x5 diagonal matrix A. >> >> So far, the code that I have is: >> >> // fill_in_matrix.cc >> static char help[] = "Fill in a parallel COO format sparse matrix."; >> #include #include >> int main(int argc, char **args){ >> Mat A; >> PetscInt m = 5, i, Istart, Iend; >> >> PetscCall(PetscInitialize(&argc, &args, NULL, help)); >> >> PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); >> PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); >> PetscCall(MatSetFromOptions(A)); >> PetscCall(MatSetUp(A)); >> PetscCall(MatGetOwnershipRange(A, &Istart, &Iend)); >> >> std::vector II{0, 0, 1, 2, 3, 4}; >> std::vector JJ{0, 0, 1, 2, 3, 4}; >> std::vector XX{2, -1, 2, 3, 4, 5}; >> >> for (i = Istart; i < Iend; i++) >> PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES)); >> >> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); >> >> PetscCall(MatDestroy(&A)); >> PetscCall(PetscFinalize()); >> return 0; >> } >> >> When running it with >> >> petscmpiexec -n 4 ./fill_in_matrix >> >> >> I get >> >> >> Mat Object: 4 MPI processes >> >> type: mpiaij >> row 0: (0, 1.) >> row 1: (1, 2.) >> row 2: (2, 3.) >> row 3: (3, 4.) >> row 4: >> >> >> Which is missing the entry of the last row. >> What am I missing? Even better, which would be the best way to fill in this matrix? >> >> We have a new interface for this: > > https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/ > > Thanks, > > Matt > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jun 20 12:50:18 2023 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 20 Jun 2023 13:50:18 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> Message-ID: On Tue, Jun 20, 2023 at 1:43?PM Diego Magela Lemos wrote: > Using all recommended approaches it worked! > Thank you all in advance. > > Now, I'm facing problems when solving a linear system using each approach. > > *COO approach* > I can answer this one. > Using MatSetPreallocationCOO and then MatSetValuesCOO, I'm able to fill > in the matrix when running with 1 MPI process. > But, if I run with more than one MPI process, the values entries are > multiplied by the number of MPI processes being used. > Is this behavior correct? 
> > Consider the following code: > > // fill_in_matrix.cc > > static char help[] = "Fill in a parallel COO format sparse matrix."; > > #include > #include > > int main(int argc, char **args) > { > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > Mat A; > > PetscInt size = 5; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Create matrix > PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); > PetscCall(MatSetFromOptions(A)); > > // Populate matrix > PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), > coo_j.data())); > PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES)); > > // View matrix > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > PetscCall(MatDestroy(&A)); > > PetscCall(PetscFinalize()); > return 0; > } > > When running with petscmpiexec -n 1 ./fill_in_matrix, I got > > Mat Object: 1 MPI process >> type: seqaij >> row 0: (0, 1.) >> row 1: (1, 2.) >> row 2: (2, 3.) >> row 3: (3, 4.) >> row 4: (4, 5.) > > > Which is a correct result. But, when running it with petscmpiexec -n 2 > ./fill_in_matrix, I get > > Mat Object: 2 MPI process >> type: mpiaij >> row 0: (0, 2.) >> row 1: (1, 4.) >> row 2: (2, 6.) >> row 3: (3, 8.) >> row 4: (4, 10.) > > > The matrix entries are multiplied by 2, that is, the number of processes > used to execute the code. > No. This was mostly intended for GPUs, where there is 1 process. If you want to use multiple MPI processes, then each process can only introduce some disjoint subset of the values. This is also how MatSetValues() works, but it might not be as obvious. Thanks, Matt > *MatSetValues approach* > > I obtain the same behavior when filling in the matrix by using MatSetValues > > static char help[] = "Fill in a parallel COO format sparse matrix."; > > // fill_in_matrix.cc > > #include > #include > > int main(int argc, char **args) > { > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > Mat A; > PetscInt size = 5; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Create matrix > PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); > PetscCall(MatSetFromOptions(A)); > > // Populate matrix > for (size_t i = 0; i < coo_v.size(); i++) > PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), & > coo_v.at(i), ADD_VALUES)); > > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > > // View matrix > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > PetscCall(MatDestroy(&A)); > > PetscCall(PetscFinalize()); > return 0; > } > > When solving a linear system, I get the correct answer no matter the > number of MPI processes when using MatSetValues approach. > The same is not true when using COO approach, whose result is only > correct when using 1 MPI process. 
> > static char help[] = "Fill in a parallel COO format sparse matrix and > solve a linear system."; > > #include > #include > > int main(int argc, char **args) > { > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > Mat A; > Vec B, X, U; > KSP ksp; > PC pc; > PetscReal norm; // norm of solution error > PetscInt its; > > PetscInt size = 5; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Create matrix > > PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); > PetscCall(MatSetFromOptions(A)); > > > // Populate matrix > > // COO > PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), > coo_j.data())); > PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES)); > > // MatSetValues for-loop > // for (size_t i = 0; i < coo_v.size(); i++) > // PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), & > coo_v.at(i), ADD_VALUES)); > > // PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > // PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > > // View matrix > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > // Create vector B > PetscCall(VecCreate(PETSC_COMM_WORLD, &B)); > PetscCall(VecSetSizes(B, PETSC_DECIDE, size)); > PetscCall(VecSetFromOptions(B)); > PetscCall(VecSetUp(B)); > > // Populate vector > PetscCall(VecSetValues(B, coo_i.size(), coo_i.data(), coo_v.data(), > ADD_VALUES)); > PetscCall(VecAssemblyBegin(B)); > PetscCall(VecAssemblyEnd(B)); > > // View vector > PetscCall(VecView(B, PETSC_VIEWER_STDERR_WORLD)); > > // Define solution and auxiliary vector > PetscCall(VecDuplicate(B, &X)); > PetscCall(VecDuplicate(B, &U)); > PetscCall(VecSet(U, 1.0)); > > // Create solver > PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp)); > PetscCall(KSPSetOperators(ksp, A, A)); > PetscCall(KSPGetPC(ksp, &pc)); > PetscCall(PCSetType(pc, PCJACOBI)); > PetscCall(KSPSetFromOptions(ksp)); > PetscCall(KSPSetTolerances(ksp, 1.e-5, PETSC_DEFAULT, PETSC_DEFAULT, > PETSC_DEFAULT)); > > // Solve > PetscCall(KSPSolve(ksp, B, X)); > > // View solution vector > PetscCall(VecView(X, PETSC_VIEWER_STDERR_WORLD)); > > // Verify the solution > PetscCall(VecAXPY(X, -1.0, U)); > PetscCall(VecNorm(X, NORM_2, &norm)); > PetscCall(KSPGetIterationNumber(ksp, &its)); > PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Norm of error %g, Iterations > %" PetscInt_FMT "\n", (double)norm, its)); > > PetscCall(MatDestroy(&A)); > PetscCall(VecDestroy(&B)); > PetscCall(VecDestroy(&X)); > PetscCall(VecDestroy(&U)); > PetscCall(KSPDestroy(&ksp)); > > PetscCall(PetscFinalize()); > return 0; > } > > Why am I getting wrong results using the COO approach with more than one > MPI process? > > Em ter., 20 de jun. de 2023 ?s 13:13, Barry Smith > escreveu: > >> >> Since you have 6 entries that needed to be added to the matrix you will >> need to call MatSetValues() six time for the six entries. >> >> On Jun 20, 2023, at 11:06 AM, Matthew Knepley wrote: >> >> On Tue, Jun 20, 2023 at 10:55?AM Diego Magela Lemos via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >>> Considering, for instance, the following COO sparse matrix format, with >>> repeated indices: >>> >>> std::vector rows{0, 0, 1, 2, 3, 4}; >>> std::vector cols{0, 0, 1, 2, 3, 4}; >>> std::vector values{2, -1, 2, 3, 4, 5}; >>> >>> that represents a 5x5 diagonal matrix A. 
>>> >>> So far, the code that I have is: >>> >>> // fill_in_matrix.cc >>> static char help[] = "Fill in a parallel COO format sparse matrix."; >>> #include #include >>> int main(int argc, char **args){ >>> Mat A; >>> PetscInt m = 5, i, Istart, Iend; >>> >>> PetscCall(PetscInitialize(&argc, &args, NULL, help)); >>> >>> PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); >>> PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); >>> PetscCall(MatSetFromOptions(A)); >>> PetscCall(MatSetUp(A)); >>> PetscCall(MatGetOwnershipRange(A, &Istart, &Iend)); >>> >>> std::vector II{0, 0, 1, 2, 3, 4}; >>> std::vector JJ{0, 0, 1, 2, 3, 4}; >>> std::vector XX{2, -1, 2, 3, 4, 5}; >>> >>> for (i = Istart; i < Iend; i++) >>> PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES)); >>> >>> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); >>> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); >>> PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); >>> >>> PetscCall(MatDestroy(&A)); >>> PetscCall(PetscFinalize()); >>> return 0; >>> } >>> >>> When running it with >>> >>> petscmpiexec -n 4 ./fill_in_matrix >>> >>> >>> I get >>> >>> >>> Mat Object: 4 MPI processes >>> >>> type: mpiaij >>> row 0: (0, 1.) >>> row 1: (1, 2.) >>> row 2: (2, 3.) >>> row 3: (3, 4.) >>> row 4: >>> >>> >>> Which is missing the entry of the last row. >>> What am I missing? Even better, which would be the best way to fill in this matrix? >>> >>> We have a new interface for this: >> >> https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/ >> >> Thanks, >> >> Matt >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Jun 20 12:50:53 2023 From: jed at jedbrown.org (Jed Brown) Date: Tue, 20 Jun 2023 11:50:53 -0600 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> Message-ID: <87h6r2rpj6.fsf@jedbrown.org> You should partition the entries so each entry is submitted by only one process. Note that duplicate entries (on the same or different proceses) are summed as you've seen. For example, in finite elements, it's typical to partition the elements and each process submits entries from its elements. Diego Magela Lemos via petsc-users writes: > Using all recommended approaches it worked! > Thank you all in advance. > > Now, I'm facing problems when solving a linear system using each approach. > > *COO approach* > > Using MatSetPreallocationCOO and then MatSetValuesCOO, I'm able to fill in > the matrix when running with 1 MPI process. > But, if I run with more than one MPI process, the values entries are > multiplied by the number of MPI processes being used. > Is this behavior correct? 
> > Consider the following code: > > // fill_in_matrix.cc > > static char help[] = "Fill in a parallel COO format sparse matrix."; > > #include > #include > > int main(int argc, char **args) > { > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > Mat A; > > PetscInt size = 5; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Create matrix > PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); > PetscCall(MatSetFromOptions(A)); > > // Populate matrix > PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), > coo_j.data())); > PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES)); > > // View matrix > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > PetscCall(MatDestroy(&A)); > > PetscCall(PetscFinalize()); > return 0; > } > > When running with petscmpiexec -n 1 ./fill_in_matrix, I got > > Mat Object: 1 MPI process >> type: seqaij >> row 0: (0, 1.) >> row 1: (1, 2.) >> row 2: (2, 3.) >> row 3: (3, 4.) >> row 4: (4, 5.) > > > Which is a correct result. But, when running it with petscmpiexec -n 2 > ./fill_in_matrix, I get > > Mat Object: 2 MPI process >> type: mpiaij >> row 0: (0, 2.) >> row 1: (1, 4.) >> row 2: (2, 6.) >> row 3: (3, 8.) >> row 4: (4, 10.) > > > The matrix entries are multiplied by 2, that is, the number of processes > used to execute the code. > > *MatSetValues approach* > > I obtain the same behavior when filling in the matrix by using MatSetValues > > static char help[] = "Fill in a parallel COO format sparse matrix."; > > // fill_in_matrix.cc > > #include > #include > > int main(int argc, char **args) > { > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > Mat A; > PetscInt size = 5; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Create matrix > PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); > PetscCall(MatSetFromOptions(A)); > > // Populate matrix > for (size_t i = 0; i < coo_v.size(); i++) > PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), & > coo_v.at(i), ADD_VALUES)); > > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > > // View matrix > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > PetscCall(MatDestroy(&A)); > > PetscCall(PetscFinalize()); > return 0; > } > > When solving a linear system, I get the correct answer no matter the number > of MPI processes when using MatSetValues approach. > The same is not true when using COO approach, whose result is only correct > when using 1 MPI process. 
> > static char help[] = "Fill in a parallel COO format sparse matrix and solve > a linear system."; > > #include > #include > > int main(int argc, char **args) > { > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > Mat A; > Vec B, X, U; > KSP ksp; > PC pc; > PetscReal norm; // norm of solution error > PetscInt its; > > PetscInt size = 5; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Create matrix > > PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size)); > PetscCall(MatSetFromOptions(A)); > > > // Populate matrix > > // COO > PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), > coo_j.data())); > PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES)); > > // MatSetValues for-loop > // for (size_t i = 0; i < coo_v.size(); i++) > // PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), & > coo_v.at(i), ADD_VALUES)); > > // PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > // PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > > // View matrix > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > // Create vector B > PetscCall(VecCreate(PETSC_COMM_WORLD, &B)); > PetscCall(VecSetSizes(B, PETSC_DECIDE, size)); > PetscCall(VecSetFromOptions(B)); > PetscCall(VecSetUp(B)); > > // Populate vector > PetscCall(VecSetValues(B, coo_i.size(), coo_i.data(), coo_v.data(), > ADD_VALUES)); > PetscCall(VecAssemblyBegin(B)); > PetscCall(VecAssemblyEnd(B)); > > // View vector > PetscCall(VecView(B, PETSC_VIEWER_STDERR_WORLD)); > > // Define solution and auxiliary vector > PetscCall(VecDuplicate(B, &X)); > PetscCall(VecDuplicate(B, &U)); > PetscCall(VecSet(U, 1.0)); > > // Create solver > PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp)); > PetscCall(KSPSetOperators(ksp, A, A)); > PetscCall(KSPGetPC(ksp, &pc)); > PetscCall(PCSetType(pc, PCJACOBI)); > PetscCall(KSPSetFromOptions(ksp)); > PetscCall(KSPSetTolerances(ksp, 1.e-5, PETSC_DEFAULT, PETSC_DEFAULT, > PETSC_DEFAULT)); > > // Solve > PetscCall(KSPSolve(ksp, B, X)); > > // View solution vector > PetscCall(VecView(X, PETSC_VIEWER_STDERR_WORLD)); > > // Verify the solution > PetscCall(VecAXPY(X, -1.0, U)); > PetscCall(VecNorm(X, NORM_2, &norm)); > PetscCall(KSPGetIterationNumber(ksp, &its)); > PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Norm of error %g, Iterations > %" PetscInt_FMT "\n", (double)norm, its)); > > PetscCall(MatDestroy(&A)); > PetscCall(VecDestroy(&B)); > PetscCall(VecDestroy(&X)); > PetscCall(VecDestroy(&U)); > PetscCall(KSPDestroy(&ksp)); > > PetscCall(PetscFinalize()); > return 0; > } > > Why am I getting wrong results using the COO approach with more than one > MPI process? > > Em ter., 20 de jun. de 2023 ?s 13:13, Barry Smith > escreveu: > >> >> Since you have 6 entries that needed to be added to the matrix you will >> need to call MatSetValues() six time for the six entries. >> >> On Jun 20, 2023, at 11:06 AM, Matthew Knepley wrote: >> >> On Tue, Jun 20, 2023 at 10:55?AM Diego Magela Lemos via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >>> Considering, for instance, the following COO sparse matrix format, with >>> repeated indices: >>> >>> std::vector rows{0, 0, 1, 2, 3, 4}; >>> std::vector cols{0, 0, 1, 2, 3, 4}; >>> std::vector values{2, -1, 2, 3, 4, 5}; >>> >>> that represents a 5x5 diagonal matrix A. 
>>> >>> So far, the code that I have is: >>> >>> // fill_in_matrix.cc >>> static char help[] = "Fill in a parallel COO format sparse matrix."; >>> #include #include >>> int main(int argc, char **args){ >>> Mat A; >>> PetscInt m = 5, i, Istart, Iend; >>> >>> PetscCall(PetscInitialize(&argc, &args, NULL, help)); >>> >>> PetscCall(MatCreate(PETSC_COMM_WORLD, &A)); >>> PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); >>> PetscCall(MatSetFromOptions(A)); >>> PetscCall(MatSetUp(A)); >>> PetscCall(MatGetOwnershipRange(A, &Istart, &Iend)); >>> >>> std::vector II{0, 0, 1, 2, 3, 4}; >>> std::vector JJ{0, 0, 1, 2, 3, 4}; >>> std::vector XX{2, -1, 2, 3, 4, 5}; >>> >>> for (i = Istart; i < Iend; i++) >>> PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES)); >>> >>> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); >>> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); >>> PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); >>> >>> PetscCall(MatDestroy(&A)); >>> PetscCall(PetscFinalize()); >>> return 0; >>> } >>> >>> When running it with >>> >>> petscmpiexec -n 4 ./fill_in_matrix >>> >>> >>> I get >>> >>> >>> Mat Object: 4 MPI processes >>> >>> type: mpiaij >>> row 0: (0, 1.) >>> row 1: (1, 2.) >>> row 2: (2, 3.) >>> row 3: (3, 4.) >>> row 4: >>> >>> >>> Which is missing the entry of the last row. >>> What am I missing? Even better, which would be the best way to fill in this matrix? >>> >>> We have a new interface for this: >> >> https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/ >> >> Thanks, >> >> Matt >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> From jed at jedbrown.org Tue Jun 20 12:56:52 2023 From: jed at jedbrown.org (Jed Brown) Date: Tue, 20 Jun 2023 11:56:52 -0600 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> Message-ID: <87edm6rp97.fsf@jedbrown.org> Matthew Knepley writes: >> The matrix entries are multiplied by 2, that is, the number of processes >> used to execute the code. >> > > No. This was mostly intended for GPUs, where there is 1 process. If you > want to use multiple MPI processes, then each process can only introduce > some disjoint subset of the values. This is also how MatSetValues() works, > but it might not be as obvious. They need not be disjoint, just sum to the expected values. This interface is very convenient for FE and FV methods. MatSetValues with ADD_VALUES has similar semantics without the intermediate storage, but it forces you to submit one element matrix at a time. Classic parallelism granularity versus memory use tradeoff with MatSetValuesCOO being a clear win on GPUs and more nuanced for CPUs. From diegomagela at usp.br Tue Jun 20 13:02:46 2023 From: diegomagela at usp.br (Diego Magela Lemos) Date: Tue, 20 Jun 2023 15:02:46 -0300 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: <87edm6rp97.fsf@jedbrown.org> References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: So... what do I need to do, please? Why am I getting wrong results when solving the linear system if the matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? Em ter., 20 de jun. 
de 2023 ?s 14:56, Jed Brown escreveu: > Matthew Knepley writes: > > >> The matrix entries are multiplied by 2, that is, the number of processes > >> used to execute the code. > >> > > > > No. This was mostly intended for GPUs, where there is 1 process. If you > > want to use multiple MPI processes, then each process can only introduce > > some disjoint subset of the values. This is also how MatSetValues() > works, > > but it might not be as obvious. > > They need not be disjoint, just sum to the expected values. This interface > is very convenient for FE and FV methods. MatSetValues with ADD_VALUES has > similar semantics without the intermediate storage, but it forces you to > submit one element matrix at a time. Classic parallelism granularity versus > memory use tradeoff with MatSetValuesCOO being a clear win on GPUs and more > nuanced for CPUs. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jun 20 13:07:49 2023 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 20 Jun 2023 14:07:49 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos wrote: > So... what do I need to do, please? > Why am I getting wrong results when solving the linear system if the > matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? > It appears that you have _all_ processes submit _all_ triples (i, j, v). Each triple can only be submitted by a single process. You can fix this in many ways. For example, an easy but suboptimal way is just to have process 0 submit them all, and all other processes submit nothing. Thanks, Matt > Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown > escreveu: > >> Matthew Knepley writes: >> >> >> The matrix entries are multiplied by 2, that is, the number of >> processes >> >> used to execute the code. >> >> >> > >> > No. This was mostly intended for GPUs, where there is 1 process. If you >> > want to use multiple MPI processes, then each process can only introduce >> > some disjoint subset of the values. This is also how MatSetValues() >> works, >> > but it might not be as obvious. >> >> They need not be disjoint, just sum to the expected values. This >> interface is very convenient for FE and FV methods. MatSetValues with >> ADD_VALUES has similar semantics without the intermediate storage, but it >> forces you to submit one element matrix at a time. Classic parallelism >> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >> win on GPUs and more nuanced for CPUs. >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From diegomagela at usp.br Wed Jun 21 08:22:05 2023 From: diegomagela at usp.br (Diego Magela Lemos) Date: Wed, 21 Jun 2023 10:22:05 -0300 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: Please, could you provide a minimal working example (or link) of how to do this? Thank you. Em ter., 20 de jun. 
de 2023 ?s 15:08, Matthew Knepley escreveu: > On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos > wrote: > >> So... what do I need to do, please? >> Why am I getting wrong results when solving the linear system if the >> matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? >> > > It appears that you have _all_ processes submit _all_ triples (i, j, v). > Each triple can only be submitted by a single process. You can fix this in > many ways. For example, an easy but suboptimal way is just to have process > 0 submit them all, and all other processes submit nothing. > > Thanks, > > Matt > > >> Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown >> escreveu: >> >>> Matthew Knepley writes: >>> >>> >> The matrix entries are multiplied by 2, that is, the number of >>> processes >>> >> used to execute the code. >>> >> >>> > >>> > No. This was mostly intended for GPUs, where there is 1 process. If you >>> > want to use multiple MPI processes, then each process can only >>> introduce >>> > some disjoint subset of the values. This is also how MatSetValues() >>> works, >>> > but it might not be as obvious. >>> >>> They need not be disjoint, just sum to the expected values. This >>> interface is very convenient for FE and FV methods. MatSetValues with >>> ADD_VALUES has similar semantics without the intermediate storage, but it >>> forces you to submit one element matrix at a time. Classic parallelism >>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >>> win on GPUs and more nuanced for CPUs. >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jun 21 08:49:49 2023 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 21 Jun 2023 09:49:49 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: On Wed, Jun 21, 2023 at 9:22?AM Diego Magela Lemos wrote: > Please, could you provide a minimal working example (or link) of how to do > this? > You can see here https://petsc.org/main/src/ksp/ksp/tutorials/ex2.c.html that each process only sets values for the rows it owns. Thanks, Matt > Thank you. > > Em ter., 20 de jun. de 2023 ?s 15:08, Matthew Knepley > escreveu: > >> On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos >> wrote: >> >>> So... what do I need to do, please? >>> Why am I getting wrong results when solving the linear system if the >>> matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? >>> >> >> It appears that you have _all_ processes submit _all_ triples (i, j, v). >> Each triple can only be submitted by a single process. You can fix this in >> many ways. For example, an easy but suboptimal way is just to have process >> 0 submit them all, and all other processes submit nothing. >> >> Thanks, >> >> Matt >> >> >>> Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown >>> escreveu: >>> >>>> Matthew Knepley writes: >>>> >>>> >> The matrix entries are multiplied by 2, that is, the number of >>>> processes >>>> >> used to execute the code. >>>> >> >>>> > >>>> > No. This was mostly intended for GPUs, where there is 1 process. 
If >>>> you >>>> > want to use multiple MPI processes, then each process can only >>>> introduce >>>> > some disjoint subset of the values. This is also how MatSetValues() >>>> works, >>>> > but it might not be as obvious. >>>> >>>> They need not be disjoint, just sum to the expected values. This >>>> interface is very convenient for FE and FV methods. MatSetValues with >>>> ADD_VALUES has similar semantics without the intermediate storage, but it >>>> forces you to submit one element matrix at a time. Classic parallelism >>>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >>>> win on GPUs and more nuanced for CPUs. >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jun 21 10:20:01 2023 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 21 Jun 2023 11:20:01 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: ex2 looks the same as the code at the beginning of the thread, which looks fine to me, yet fails. (the only thing I can think of is that &v.at(i) is not doing what one wants) Diego: I would start with this ex2.c, add your view statement, verify; incrementally change ex2 to your syntax and see where it breaks. Mark On Wed, Jun 21, 2023 at 9:50?AM Matthew Knepley wrote: > On Wed, Jun 21, 2023 at 9:22?AM Diego Magela Lemos > wrote: > >> Please, could you provide a minimal working example (or link) of how to >> do this? >> > > You can see here > > https://petsc.org/main/src/ksp/ksp/tutorials/ex2.c.html > > that each process only sets values for the rows it owns. > > Thanks, > > Matt > > >> Thank you. >> >> Em ter., 20 de jun. de 2023 ?s 15:08, Matthew Knepley >> escreveu: >> >>> On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos >>> wrote: >>> >>>> So... what do I need to do, please? >>>> Why am I getting wrong results when solving the linear system if the >>>> matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? >>>> >>> >>> It appears that you have _all_ processes submit _all_ triples (i, j, v). >>> Each triple can only be submitted by a single process. You can fix this in >>> many ways. For example, an easy but suboptimal way is just to have process >>> 0 submit them all, and all other processes submit nothing. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown >>>> escreveu: >>>> >>>>> Matthew Knepley writes: >>>>> >>>>> >> The matrix entries are multiplied by 2, that is, the number of >>>>> processes >>>>> >> used to execute the code. >>>>> >> >>>>> > >>>>> > No. This was mostly intended for GPUs, where there is 1 process. If >>>>> you >>>>> > want to use multiple MPI processes, then each process can only >>>>> introduce >>>>> > some disjoint subset of the values. This is also how MatSetValues() >>>>> works, >>>>> > but it might not be as obvious. 
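For reference, a minimal sketch of the MatSetPreallocationCOO / MatSetValuesCOO usage under discussion (an untested sketch, not the thread's final code; it takes the easy-but-suboptimal route mentioned earlier of letting rank 0 pass every triple, and it relies on repeated (i, j) entries being summed):

static char help[] = "Assemble a small matrix from COO triples.";
#include <petscmat.h>

int main(int argc, char **args)
{
  Mat         A;
  PetscMPIInt rank;
  PetscInt    m = 5;
  PetscInt    coo_i[] = {0, 0, 1, 2, 3, 4};
  PetscInt    coo_j[] = {0, 0, 1, 2, 3, 4};
  PetscScalar coo_v[] = {2, -1, 2, 3, 4, 5};

  PetscCall(PetscInitialize(&argc, &args, NULL, help));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m));
  PetscCall(MatSetFromOptions(A));
  /* Only rank 0 contributes triples; the other ranks pass an empty list.
     Repeated (i, j) pairs, such as the two (0, 0) entries, are summed. */
  PetscCall(MatSetPreallocationCOO(A, rank == 0 ? 6 : 0, coo_i, coo_j));
  PetscCall(MatSetValuesCOO(A, coo_v, INSERT_VALUES));
  /* The matrix is ready to use; no MatAssemblyBegin/End calls are needed here. */
  PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

In a production code each rank would instead pass the triples it generated locally; as noted above, the per-rank lists only have to sum to the intended values, they do not have to be disjoint.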
>>>>> >>>>> They need not be disjoint, just sum to the expected values. This >>>>> interface is very convenient for FE and FV methods. MatSetValues with >>>>> ADD_VALUES has similar semantics without the intermediate storage, but it >>>>> forces you to submit one element matrix at a time. Classic parallelism >>>>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >>>>> win on GPUs and more nuanced for CPUs. >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From diegomagela at usp.br Wed Jun 21 11:57:25 2023 From: diegomagela at usp.br (Diego Magela Lemos) Date: Wed, 21 Jun 2023 13:57:25 -0300 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: Unfortunately, I can modify ex2 to perform the matrix fill in using only one rank, although I have understood how it works. Is it so hard to do that? Thank you. Em qua., 21 de jun. de 2023 ?s 12:20, Mark Adams escreveu: > ex2 looks the same as the code at the beginning of the thread, which looks > fine to me, yet fails. > (the only thing I can think of is that &v.at(i) is not doing what one > wants) > > Diego: I would start with this ex2.c, add your view statement, verify; > incrementally change ex2 to your syntax and see where it breaks. > > Mark > > On Wed, Jun 21, 2023 at 9:50?AM Matthew Knepley wrote: > >> On Wed, Jun 21, 2023 at 9:22?AM Diego Magela Lemos >> wrote: >> >>> Please, could you provide a minimal working example (or link) of how to >>> do this? >>> >> >> You can see here >> >> https://petsc.org/main/src/ksp/ksp/tutorials/ex2.c.html >> >> that each process only sets values for the rows it owns. >> >> Thanks, >> >> Matt >> >> >>> Thank you. >>> >>> Em ter., 20 de jun. de 2023 ?s 15:08, Matthew Knepley >>> escreveu: >>> >>>> On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos >>>> wrote: >>>> >>>>> So... what do I need to do, please? >>>>> Why am I getting wrong results when solving the linear system if the >>>>> matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? >>>>> >>>> >>>> It appears that you have _all_ processes submit _all_ triples (i, j, >>>> v). Each triple can only be submitted by a single process. You can fix this >>>> in many ways. For example, an easy but suboptimal way is just to have >>>> process 0 submit them all, and all other processes submit nothing. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown >>>>> escreveu: >>>>> >>>>>> Matthew Knepley writes: >>>>>> >>>>>> >> The matrix entries are multiplied by 2, that is, the number of >>>>>> processes >>>>>> >> used to execute the code. >>>>>> >> >>>>>> > >>>>>> > No. This was mostly intended for GPUs, where there is 1 process. If >>>>>> you >>>>>> > want to use multiple MPI processes, then each process can only >>>>>> introduce >>>>>> > some disjoint subset of the values. 
This is also how MatSetValues() >>>>>> works, >>>>>> > but it might not be as obvious. >>>>>> >>>>>> They need not be disjoint, just sum to the expected values. This >>>>>> interface is very convenient for FE and FV methods. MatSetValues with >>>>>> ADD_VALUES has similar semantics without the intermediate storage, but it >>>>>> forces you to submit one element matrix at a time. Classic parallelism >>>>>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >>>>>> win on GPUs and more nuanced for CPUs. >>>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jun 21 12:11:58 2023 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 21 Jun 2023 13:11:58 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: On Wed, Jun 21, 2023 at 12:57?PM Diego Magela Lemos wrote: > Unfortunately, I can modify ex2 to perform the matrix fill in using only > one rank, although I have understood how it works. > Is it so hard to do that? > It should not be. Maybe describe what is not clear? ex2 runs in parallel now. Thanks, Matt > Thank you. > > > Em qua., 21 de jun. de 2023 ?s 12:20, Mark Adams > escreveu: > >> ex2 looks the same as the code at the beginning of the thread, which >> looks fine to me, yet fails. >> (the only thing I can think of is that &v.at(i) is not doing what one >> wants) >> >> Diego: I would start with this ex2.c, add your view statement, verify; >> incrementally change ex2 to your syntax and see where it breaks. >> >> Mark >> >> On Wed, Jun 21, 2023 at 9:50?AM Matthew Knepley >> wrote: >> >>> On Wed, Jun 21, 2023 at 9:22?AM Diego Magela Lemos >>> wrote: >>> >>>> Please, could you provide a minimal working example (or link) of how to >>>> do this? >>>> >>> >>> You can see here >>> >>> https://petsc.org/main/src/ksp/ksp/tutorials/ex2.c.html >>> >>> that each process only sets values for the rows it owns. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thank you. >>>> >>>> Em ter., 20 de jun. de 2023 ?s 15:08, Matthew Knepley < >>>> knepley at gmail.com> escreveu: >>>> >>>>> On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos >>>>> wrote: >>>>> >>>>>> So... what do I need to do, please? >>>>>> Why am I getting wrong results when solving the linear system if the >>>>>> matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? >>>>>> >>>>> >>>>> It appears that you have _all_ processes submit _all_ triples (i, j, >>>>> v). Each triple can only be submitted by a single process. You can fix this >>>>> in many ways. For example, an easy but suboptimal way is just to have >>>>> process 0 submit them all, and all other processes submit nothing. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Em ter., 20 de jun. 
de 2023 ?s 14:56, Jed Brown >>>>>> escreveu: >>>>>> >>>>>>> Matthew Knepley writes: >>>>>>> >>>>>>> >> The matrix entries are multiplied by 2, that is, the number of >>>>>>> processes >>>>>>> >> used to execute the code. >>>>>>> >> >>>>>>> > >>>>>>> > No. This was mostly intended for GPUs, where there is 1 process. >>>>>>> If you >>>>>>> > want to use multiple MPI processes, then each process can only >>>>>>> introduce >>>>>>> > some disjoint subset of the values. This is also how >>>>>>> MatSetValues() works, >>>>>>> > but it might not be as obvious. >>>>>>> >>>>>>> They need not be disjoint, just sum to the expected values. This >>>>>>> interface is very convenient for FE and FV methods. MatSetValues with >>>>>>> ADD_VALUES has similar semantics without the intermediate storage, but it >>>>>>> forces you to submit one element matrix at a time. Classic parallelism >>>>>>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >>>>>>> win on GPUs and more nuanced for CPUs. >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From diegomagela at usp.br Wed Jun 21 13:16:01 2023 From: diegomagela at usp.br (Diego Magela Lemos) Date: Wed, 21 Jun 2023 15:16:01 -0300 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: So far, I've tried this: // fill_in_matrix.cc static char help[] = "Fill in a parallel COO format sparse matrix."; #include #include int main(int argc, char **args) { MPI_Comm comm; Mat A; PetscInt m = 5; PetscMPIInt rank, size; PetscCall(PetscInitialize(&argc, &args, NULL, help)); comm = PETSC_COMM_WORLD; PetscCallMPI(MPI_Comm_rank(comm, &rank)); PetscCallMPI(MPI_Comm_size(comm, &size)); PetscCall(MatCreate(comm, &A)); PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); PetscCall(MatSetFromOptions(A)); PetscCall(MatSetUp(A)); std::vector coo_i{0, 0, 1, 2, 3, 4}; std::vector coo_j{0, 0, 1, 2, 3, 4}; std::vector coo_v{2, -1, 2, 3, 4, 5}; PetscCallMPI(MPI_Comm_rank(comm, &rank)); if (rank == 0) { for (size_t j = 0; j < coo_i.size(); j++) PetscCall(MatSetValues(A, 1, &coo_i.at(j), 1, &coo_j.at(j), &coo_v.at(j), ADD_VALUES)); PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); } PetscCall(MatDestroy(&A)); PetscCall(PetscFinalize()); return 0; } When running with 1 process, it runs like a charm, but when running with more than one process, the code does not finish. Somehow it gets stuck forever. Em qua., 21 de jun. 
de 2023 ?s 14:12, Matthew Knepley escreveu: > On Wed, Jun 21, 2023 at 12:57?PM Diego Magela Lemos > wrote: > >> Unfortunately, I can modify ex2 to perform the matrix fill in using only >> one rank, although I have understood how it works. >> Is it so hard to do that? >> > > It should not be. Maybe describe what is not clear? ex2 runs in parallel > now. > > Thanks, > > Matt > > >> Thank you. >> >> >> Em qua., 21 de jun. de 2023 ?s 12:20, Mark Adams >> escreveu: >> >>> ex2 looks the same as the code at the beginning of the thread, which >>> looks fine to me, yet fails. >>> (the only thing I can think of is that &v.at(i) is not doing what one >>> wants) >>> >>> Diego: I would start with this ex2.c, add your view statement, verify; >>> incrementally change ex2 to your syntax and see where it breaks. >>> >>> Mark >>> >>> On Wed, Jun 21, 2023 at 9:50?AM Matthew Knepley >>> wrote: >>> >>>> On Wed, Jun 21, 2023 at 9:22?AM Diego Magela Lemos >>>> wrote: >>>> >>>>> Please, could you provide a minimal working example (or link) of how >>>>> to do this? >>>>> >>>> >>>> You can see here >>>> >>>> https://petsc.org/main/src/ksp/ksp/tutorials/ex2.c.html >>>> >>>> that each process only sets values for the rows it owns. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thank you. >>>>> >>>>> Em ter., 20 de jun. de 2023 ?s 15:08, Matthew Knepley < >>>>> knepley at gmail.com> escreveu: >>>>> >>>>>> On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos < >>>>>> diegomagela at usp.br> wrote: >>>>>> >>>>>>> So... what do I need to do, please? >>>>>>> Why am I getting wrong results when solving the linear system if the >>>>>>> matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? >>>>>>> >>>>>> >>>>>> It appears that you have _all_ processes submit _all_ triples (i, j, >>>>>> v). Each triple can only be submitted by a single process. You can fix this >>>>>> in many ways. For example, an easy but suboptimal way is just to have >>>>>> process 0 submit them all, and all other processes submit nothing. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown >>>>>>> escreveu: >>>>>>> >>>>>>>> Matthew Knepley writes: >>>>>>>> >>>>>>>> >> The matrix entries are multiplied by 2, that is, the number of >>>>>>>> processes >>>>>>>> >> used to execute the code. >>>>>>>> >> >>>>>>>> > >>>>>>>> > No. This was mostly intended for GPUs, where there is 1 process. >>>>>>>> If you >>>>>>>> > want to use multiple MPI processes, then each process can only >>>>>>>> introduce >>>>>>>> > some disjoint subset of the values. This is also how >>>>>>>> MatSetValues() works, >>>>>>>> > but it might not be as obvious. >>>>>>>> >>>>>>>> They need not be disjoint, just sum to the expected values. This >>>>>>>> interface is very convenient for FE and FV methods. MatSetValues with >>>>>>>> ADD_VALUES has similar semantics without the intermediate storage, but it >>>>>>>> forces you to submit one element matrix at a time. Classic parallelism >>>>>>>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >>>>>>>> win on GPUs and more nuanced for CPUs. >>>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. 
>>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Jun 21 13:23:09 2023 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 21 Jun 2023 14:23:09 -0400 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: These routines are marked as collective in their manual pages and must be called by all MPI processes that share the matrix A. PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > On Jun 21, 2023, at 2:16 PM, Diego Magela Lemos via petsc-users wrote: > > So far, I've tried this: > > // fill_in_matrix.cc > > static char help[] = "Fill in a parallel COO format sparse matrix."; > > #include > #include > > int main(int argc, char **args) > { > MPI_Comm comm; > Mat A; > PetscInt m = 5; > PetscMPIInt rank, size; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > comm = PETSC_COMM_WORLD; > PetscCallMPI(MPI_Comm_rank(comm, &rank)); > PetscCallMPI(MPI_Comm_size(comm, &size)); > > PetscCall(MatCreate(comm, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); > PetscCall(MatSetFromOptions(A)); > PetscCall(MatSetUp(A)); > > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > PetscCallMPI(MPI_Comm_rank(comm, &rank)); > > if (rank == 0) > { > for (size_t j = 0; j < coo_i.size(); j++) > PetscCall(MatSetValues(A, > 1, &coo_i.at (j), > 1, &coo_j.at (j), > &coo_v.at (j), > ADD_VALUES)); > > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > } > > PetscCall(MatDestroy(&A)); > PetscCall(PetscFinalize()); > > return 0; > } > > When running with 1 process, it runs like a charm, but when running with more than one process, the code does not finish. Somehow it gets stuck forever. > > Em qua., 21 de jun. de 2023 ?s 14:12, Matthew Knepley > escreveu: >> On Wed, Jun 21, 2023 at 12:57?PM Diego Magela Lemos > wrote: >>> Unfortunately, I can modify ex2 to perform the matrix fill in using only one rank, although I have understood how it works. >>> Is it so hard to do that? >> >> It should not be. Maybe describe what is not clear? ex2 runs in parallel now. >> >> Thanks, >> >> Matt >> >>> Thank you. >>> >>> >>> Em qua., 21 de jun. de 2023 ?s 12:20, Mark Adams > escreveu: >>>> ex2 looks the same as the code at the beginning of the thread, which looks fine to me, yet fails. >>>> (the only thing I can think of is that &v.at (i) is not doing what one wants) >>>> >>>> Diego: I would start with this ex2.c, add your view statement, verify; incrementally change ex2 to your syntax and see where it breaks. 
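To make the collective-call requirement concrete, here is a sketch of the structure being suggested (untested; only the insertion loop is restricted to rank 0, while the assembly and viewing calls are made by every rank):

static char help[] = "Insert values from one rank; assemble on all ranks.";
#include <petscmat.h>

int main(int argc, char **args)
{
  Mat         A;
  PetscMPIInt rank;
  PetscInt    m = 5;
  PetscInt    coo_i[] = {0, 0, 1, 2, 3, 4};
  PetscInt    coo_j[] = {0, 0, 1, 2, 3, 4};
  PetscScalar coo_v[] = {2, -1, 2, 3, 4, 5};

  PetscCall(PetscInitialize(&argc, &args, NULL, help));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  if (rank == 0) { /* one rank owns every triple, so nothing is added twice */
    for (PetscInt k = 0; k < 6; k++)
      PetscCall(MatSetValues(A, 1, &coo_i[k], 1, &coo_j[k], &coo_v[k], ADD_VALUES));
  }
  /* Collective calls: every rank must reach these, outside any rank test. */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}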
>>>> >>>> Mark >>>> >>>> On Wed, Jun 21, 2023 at 9:50?AM Matthew Knepley > wrote: >>>>> On Wed, Jun 21, 2023 at 9:22?AM Diego Magela Lemos > wrote: >>>>>> Please, could you provide a minimal working example (or link) of how to do this? >>>>> >>>>> You can see here >>>>> >>>>> https://petsc.org/main/src/ksp/ksp/tutorials/ex2.c.html >>>>> >>>>> that each process only sets values for the rows it owns. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>>> Thank you. >>>>>> >>>>>> Em ter., 20 de jun. de 2023 ?s 15:08, Matthew Knepley > escreveu: >>>>>>> On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos > wrote: >>>>>>>> So... what do I need to do, please? >>>>>>>> Why am I getting wrong results when solving the linear system if the matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO? >>>>>>> >>>>>>> It appears that you have _all_ processes submit _all_ triples (i, j, v). Each triple can only be submitted by a single process. You can fix this in many ways. For example, an easy but suboptimal way is just to have process 0 submit them all, and all other processes submit nothing. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>>> Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown > escreveu: >>>>>>>>> Matthew Knepley > writes: >>>>>>>>> >>>>>>>>> >> The matrix entries are multiplied by 2, that is, the number of processes >>>>>>>>> >> used to execute the code. >>>>>>>>> >> >>>>>>>>> > >>>>>>>>> > No. This was mostly intended for GPUs, where there is 1 process. If you >>>>>>>>> > want to use multiple MPI processes, then each process can only introduce >>>>>>>>> > some disjoint subset of the values. This is also how MatSetValues() works, >>>>>>>>> > but it might not be as obvious. >>>>>>>>> >>>>>>>>> They need not be disjoint, just sum to the expected values. This interface is very convenient for FE and FV methods. MatSetValues with ADD_VALUES has similar semantics without the intermediate storage, but it forces you to submit one element matrix at a time. Classic parallelism granularity versus memory use tradeoff with MatSetValuesCOO being a clear win on GPUs and more nuanced for CPUs. >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From diegomagela at usp.br Wed Jun 21 14:08:34 2023 From: diegomagela at usp.br (Diego Magela Lemos) Date: Wed, 21 Jun 2023 16:08:34 -0300 Subject: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix? In-Reply-To: References: <2124C01A-B0E6-4684-92E2-22B5653BE2DE@petsc.dev> <87edm6rp97.fsf@jedbrown.org> Message-ID: Finally! Moving these routines to outside the if statement solved the problem! Thank you! Em qua., 21 de jun. 
de 2023 ?s 15:23, Barry Smith escreveu: > > These routines are marked as collective in their manual pages and must > be called by all MPI processes that share the matrix A. > > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > > > > On Jun 21, 2023, at 2:16 PM, Diego Magela Lemos via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > So far, I've tried this: > > // fill_in_matrix.cc > > static char help[] = "Fill in a parallel COO format sparse matrix."; > > #include > #include > > int main(int argc, char **args) > { > MPI_Comm comm; > Mat A; > PetscInt m = 5; > PetscMPIInt rank, size; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > comm = PETSC_COMM_WORLD; > PetscCallMPI(MPI_Comm_rank(comm, &rank)); > PetscCallMPI(MPI_Comm_size(comm, &size)); > > PetscCall(MatCreate(comm, &A)); > PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m)); > PetscCall(MatSetFromOptions(A)); > PetscCall(MatSetUp(A)); > > std::vector coo_i{0, 0, 1, 2, 3, 4}; > std::vector coo_j{0, 0, 1, 2, 3, 4}; > std::vector coo_v{2, -1, 2, 3, 4, 5}; > > PetscCallMPI(MPI_Comm_rank(comm, &rank)); > > if (rank == 0) > { > for (size_t j = 0; j < coo_i.size(); j++) > PetscCall(MatSetValues(A, > 1, &coo_i.at(j), > 1, &coo_j.at(j), > &coo_v.at(j), > ADD_VALUES)); > > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD)); > } > > PetscCall(MatDestroy(&A)); > PetscCall(PetscFinalize()); > > return 0; > } > > When running with 1 process, it runs like a charm, but when running with > more than one process, the code does not finish. Somehow it gets stuck > forever. > > Em qua., 21 de jun. de 2023 ?s 14:12, Matthew Knepley > escreveu: > >> On Wed, Jun 21, 2023 at 12:57?PM Diego Magela Lemos >> wrote: >> >>> Unfortunately, I can modify ex2 to perform the matrix fill in using only >>> one rank, although I have understood how it works. >>> Is it so hard to do that? >>> >> >> It should not be. Maybe describe what is not clear? ex2 runs in parallel >> now. >> >> Thanks, >> >> Matt >> >> >>> Thank you. >>> >>> >>> Em qua., 21 de jun. de 2023 ?s 12:20, Mark Adams >>> escreveu: >>> >>>> ex2 looks the same as the code at the beginning of the thread, which >>>> looks fine to me, yet fails. >>>> (the only thing I can think of is that &v.at(i) is not doing what one >>>> wants) >>>> >>>> Diego: I would start with this ex2.c, add your view statement, verify; >>>> incrementally change ex2 to your syntax and see where it breaks. >>>> >>>> Mark >>>> >>>> On Wed, Jun 21, 2023 at 9:50?AM Matthew Knepley >>>> wrote: >>>> >>>>> On Wed, Jun 21, 2023 at 9:22?AM Diego Magela Lemos >>>>> wrote: >>>>> >>>>>> Please, could you provide a minimal working example (or link) of how >>>>>> to do this? >>>>>> >>>>> >>>>> You can see here >>>>> >>>>> https://petsc.org/main/src/ksp/ksp/tutorials/ex2.c.html >>>>> >>>>> that each process only sets values for the rows it owns. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thank you. >>>>>> >>>>>> Em ter., 20 de jun. de 2023 ?s 15:08, Matthew Knepley < >>>>>> knepley at gmail.com> escreveu: >>>>>> >>>>>>> On Tue, Jun 20, 2023 at 2:02?PM Diego Magela Lemos < >>>>>>> diegomagela at usp.br> wrote: >>>>>>> >>>>>>>> So... what do I need to do, please? 
>>>>>>>> Why am I getting wrong results when solving the linear system if >>>>>>>> the matrix is filled in with MatSetPreallocationCOO and >>>>>>>> MatSetValuesCOO? >>>>>>>> >>>>>>> >>>>>>> It appears that you have _all_ processes submit _all_ triples (i, j, >>>>>>> v). Each triple can only be submitted by a single process. You can fix this >>>>>>> in many ways. For example, an easy but suboptimal way is just to have >>>>>>> process 0 submit them all, and all other processes submit nothing. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Em ter., 20 de jun. de 2023 ?s 14:56, Jed Brown >>>>>>>> escreveu: >>>>>>>> >>>>>>>>> Matthew Knepley writes: >>>>>>>>> >>>>>>>>> >> The matrix entries are multiplied by 2, that is, the number of >>>>>>>>> processes >>>>>>>>> >> used to execute the code. >>>>>>>>> >> >>>>>>>>> > >>>>>>>>> > No. This was mostly intended for GPUs, where there is 1 process. >>>>>>>>> If you >>>>>>>>> > want to use multiple MPI processes, then each process can only >>>>>>>>> introduce >>>>>>>>> > some disjoint subset of the values. This is also how >>>>>>>>> MatSetValues() works, >>>>>>>>> > but it might not be as obvious. >>>>>>>>> >>>>>>>>> They need not be disjoint, just sum to the expected values. This >>>>>>>>> interface is very convenient for FE and FV methods. MatSetValues with >>>>>>>>> ADD_VALUES has similar semantics without the intermediate storage, but it >>>>>>>>> forces you to submit one element matrix at a time. Classic parallelism >>>>>>>>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear >>>>>>>>> win on GPUs and more nuanced for CPUs. >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Wed Jun 21 15:58:14 2023 From: liufield at gmail.com (neil liu) Date: Wed, 21 Jun 2023 16:58:14 -0400 Subject: [petsc-users] Inquiry about the c++ destructor and PetscFinalize. In-Reply-To: References: <53047c60-78b4-3c7f-5b62-927d9c47e294@alaska.edu> Message-ID: It works well for one processor; but when I tried two processors using mpiexec -n 2 ./ex1, there is an error shown as belows. If the line "DMDestroy(dmDist)" is commented out, the error will go away. This is a little confusing for me. [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind [1]PETSC ERROR: Object already free: Parameter # 1 [1]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
[1]PETSC ERROR: Petsc Release Version 3.19.1, Apr 30, 2023 [1]PETSC ERROR: ./ex1 on a arch-linux-c-debug named kirin.remcominc.com by xiaodong.liu Wed Jun 21 16:54:46 2023 [1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack --download-ctetgen [1]PETSC ERROR: #1 DMDestroy() at /home/xiaodong.liu/Documents/petsc-3.19.1/src/dm/interface/dm.c:639 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind [0]PETSC ERROR: Object already free: Parameter # 1 [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.19.1, Apr 30, 2023 [0]PETSC ERROR: ./ex1 on a arch-linux-c-debug named kirin.remcominc.com by xiaodong.liu Wed Jun 21 16:54:46 2023 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack --download-ctetgen [0]PETSC ERROR: #1 DMDestroy() at /home/xiaodong.liu/Documents/petsc-3.19.1/src/dm/interface/dm.c:639 On Tue, Jun 20, 2023 at 12:36?PM neil liu wrote: > Thanks a lot, Constantine. It works pretty well. > > > > On Fri, Jun 16, 2023 at 6:52?PM Constantine Khrulev > wrote: > >> In your code the destructor of DMManage is called at the end of scope, >> i.e. after the PetscFinalize() call. >> >> You should be able to avoid this error by putting "DMManage objDMManage" >> in a code block to limit its scope and ensure that it is destroyed >> before PETSc is finalized: >> >> int main(int argc, char** argv) { >> PetscFunctionBeginUser; >> PetscCall(PetscInitialize(&argc, &argv, NULL, help)); >> >> { >> DMManage objDMManage; >> } // objDMManage is destroyed here >> >> PetscFinalize(); >> return 0; >> } >> >> On 6/16/23 14:13, neil liu wrote: >> > Dear Petsc developers, >> > >> > I am trying to use Petsc with C++. And came across one issue. >> > Class DMManage has been defined, one default constructor and >> > destructor has been defined there. >> > The code has a runtime error, "double free or corruption". Finally I >> > found that, this is due to PetscFinalize. If I called explicitly the >> > destructor before this PetscFinalze, the error will disappear. >> > >> > Does that mean PetscFinalize do some work to destroy DM? 
>> > >> > Thanks, >> > >> > #include >> > #include >> > #include >> > #include >> > >> > class DMManage{ >> > PetscSF distributionSF; >> > public: >> > DM dm; >> > DMManage(); >> > ~DMManage(); >> > }; >> > >> > DMManage::DMManage(){ >> > const char filename[] = "ParallelWaveguide.msh"; >> > DM dmDist; >> > PetscViewer viewer; >> > PetscViewerCreate(PETSC_COMM_WORLD, &viewer); >> > PetscViewerSetType(viewer, PETSCVIEWERASCII); >> > PetscViewerFileSetMode(viewer, FILE_MODE_READ); >> > PetscViewerFileSetName(viewer, filename); >> > DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm); >> > PetscViewerDestroy(&viewer); >> > PetscInt overlap = 0; >> > DMPlexDistribute(dm, overlap, &distributionSF, &dmDist); >> > std::cout<<&dm<> > if (dmDist) { >> > DMDestroy(&dm); >> > dm = dmDist; >> > } >> > DMDestroy(&dmDist); >> > } >> > >> > DMManage::~DMManage(){ >> > DMDestroy(&dm); >> > } >> > >> > int main(int argc, char** argv) { >> > PetscFunctionBeginUser; >> > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); >> > >> > DMManage objDMManage; >> > >> > PetscFinalize(); >> > return 0; >> > } >> >> -- >> Constantine >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob.fai at gmail.com Wed Jun 21 16:12:33 2023 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Wed, 21 Jun 2023 17:12:33 -0400 Subject: [petsc-users] Inquiry about the c++ destructor and PetscFinalize. In-Reply-To: References: <53047c60-78b4-3c7f-5b62-927d9c47e294@alaska.edu> Message-ID: > If the line "DMDestroy(dmDist)" is commented out, the error will go away > > if (dmDist) { > > DMDestroy(&dm); > > dm = dmDist; > > } > > DMDestroy(&dmDist); This is because you are double-destroying dmDist here. Note that all petsc objects are pointers, so assignment may not do what you think it does. In this case, DM is a struct *_p_DM. So removing DMDestroy(dmDist) is correct. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) > On Jun 21, 2023, at 16:58, neil liu wrote: > > It works well for one processor; but when I tried two processors using mpiexec -n 2 ./ex1, > there is an error shown as belows. If the line "DMDestroy(dmDist)" is commented out, the error will go away. This is a little confusing for me. > > [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [1]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind > [1]PETSC ERROR: Object already free: Parameter # 1 > [1]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.19.1, Apr 30, 2023 > [1]PETSC ERROR: ./ex1 on a arch-linux-c-debug named kirin.remcominc.com by xiaodong.liu Wed Jun 21 16:54:46 2023 > [1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack --download-ctetgen > [1]PETSC ERROR: #1 DMDestroy() at /home/xiaodong.liu/Documents/petsc-3.19.1/src/dm/interface/dm.c:639 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind > [0]PETSC ERROR: Object already free: Parameter # 1 > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.19.1, Apr 30, 2023 > [0]PETSC ERROR: ./ex1 on a arch-linux-c-debug named kirin.remcominc.com by xiaodong.liu Wed Jun 21 16:54:46 2023 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack --download-ctetgen > [0]PETSC ERROR: #1 DMDestroy() at /home/xiaodong.liu/Documents/petsc-3.19.1/src/dm/interface/dm.c:639 > > On Tue, Jun 20, 2023 at 12:36?PM neil liu wrote: > Thanks a lot, Constantine. It works pretty well. > > > > On Fri, Jun 16, 2023 at 6:52?PM Constantine Khrulev wrote: > In your code the destructor of DMManage is called at the end of scope, > i.e. after the PetscFinalize() call. > > You should be able to avoid this error by putting "DMManage objDMManage" > in a code block to limit its scope and ensure that it is destroyed > before PETSc is finalized: > > int main(int argc, char** argv) { > PetscFunctionBeginUser; > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); > > { > DMManage objDMManage; > } // objDMManage is destroyed here > > PetscFinalize(); > return 0; > } > > On 6/16/23 14:13, neil liu wrote: > > Dear Petsc developers, > > > > I am trying to use Petsc with C++. And came across one issue. > > Class DMManage has been defined, one default constructor and > > destructor has been defined there. > > The code has a runtime error, "double free or corruption". Finally I > > found that, this is due to PetscFinalize. If I called explicitly the > > destructor before this PetscFinalze, the error will disappear. > > > > Does that mean PetscFinalize do some work to destroy DM? > > > > Thanks, > > > > #include > > #include > > #include > > #include > > > > class DMManage{ > > PetscSF distributionSF; > > public: > > DM dm; > > DMManage(); > > ~DMManage(); > > }; > > > > DMManage::DMManage(){ > > const char filename[] = "ParallelWaveguide.msh"; > > DM dmDist; > > PetscViewer viewer; > > PetscViewerCreate(PETSC_COMM_WORLD, &viewer); > > PetscViewerSetType(viewer, PETSCVIEWERASCII); > > PetscViewerFileSetMode(viewer, FILE_MODE_READ); > > PetscViewerFileSetName(viewer, filename); > > DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm); > > PetscViewerDestroy(&viewer); > > PetscInt overlap = 0; > > DMPlexDistribute(dm, overlap, &distributionSF, &dmDist); > > std::cout<<&dm< > if (dmDist) { > > DMDestroy(&dm); > > dm = dmDist; > > } > > DMDestroy(&dmDist); > > } > > > > DMManage::~DMManage(){ > > DMDestroy(&dm); > > } > > > > int main(int argc, char** argv) { > > PetscFunctionBeginUser; > > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); > > > > DMManage objDMManage; > > > > PetscFinalize(); > > return 0; > > } > > -- > Constantine > From liufield at gmail.com Wed Jun 21 16:27:23 2023 From: liufield at gmail.com (neil liu) Date: Wed, 21 Jun 2023 17:27:23 -0400 Subject: [petsc-users] Inquiry about the c++ destructor and PetscFinalize. In-Reply-To: References: <53047c60-78b4-3c7f-5b62-927d9c47e294@alaska.edu> Message-ID: Great, thanks a lot. On Wed, Jun 21, 2023 at 5:12?PM Jacob Faibussowitsch wrote: > > If the line "DMDestroy(dmDist)" is commented out, the error will go away > > > > if (dmDist) { > > > DMDestroy(&dm); > > > dm = dmDist; > > > } > > > DMDestroy(&dmDist); > > > This is because you are double-destroying dmDist here. Note that all petsc > objects are pointers, so assignment may not do what you think it does. In > this case, DM is a struct *_p_DM. > > So removing DMDestroy(dmDist) is correct. 
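Following that advice, the distribution block in the DMManage constructor from earlier in the thread would look something like this sketch (untested; error checking with PetscCall omitted, as in the original):

DMManage::DMManage() {
  const char  filename[] = "ParallelWaveguide.msh";
  DM          dmDist;
  PetscViewer viewer;
  PetscInt    overlap = 0;

  PetscViewerCreate(PETSC_COMM_WORLD, &viewer);
  PetscViewerSetType(viewer, PETSCVIEWERASCII);
  PetscViewerFileSetMode(viewer, FILE_MODE_READ);
  PetscViewerFileSetName(viewer, filename);
  DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm);
  PetscViewerDestroy(&viewer);

  DMPlexDistribute(dm, overlap, &distributionSF, &dmDist);
  if (dmDist) {
    DMDestroy(&dm); /* release the serial mesh */
    dm = dmDist;    /* dm and dmDist are now two handles to the same DM */
  }
  /* No DMDestroy(&dmDist) here: it would free the DM that dm now points to,
     and the later DMDestroy(&dm) in the destructor would hit an already-freed object. */
}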
> > Best regards, > > Jacob Faibussowitsch > (Jacob Fai - booss - oh - vitch) > > > On Jun 21, 2023, at 16:58, neil liu wrote: > > > > It works well for one processor; but when I tried two processors using > mpiexec -n 2 ./ex1, > > there is an error shown as belows. If the line "DMDestroy(dmDist)" is > commented out, the error will go away. This is a little confusing for me. > > > > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [1]PETSC ERROR: Corrupt argument: > https://petsc.org/release/faq/#valgrind > > [1]PETSC ERROR: Object already free: Parameter # 1 > > [1]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > > [1]PETSC ERROR: Petsc Release Version 3.19.1, Apr 30, 2023 > > [1]PETSC ERROR: ./ex1 on a arch-linux-c-debug named kirin.remcominc.com > by xiaodong.liu Wed Jun 21 16:54:46 2023 > > [1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-mpich --download-fblaslapack > --download-ctetgen > > [1]PETSC ERROR: #1 DMDestroy() at > /home/xiaodong.liu/Documents/petsc-3.19.1/src/dm/interface/dm.c:639 > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [0]PETSC ERROR: Corrupt argument: > https://petsc.org/release/faq/#valgrind > > [0]PETSC ERROR: Object already free: Parameter # 1 > > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > > [0]PETSC ERROR: Petsc Release Version 3.19.1, Apr 30, 2023 > > [0]PETSC ERROR: ./ex1 on a arch-linux-c-debug named kirin.remcominc.com > by xiaodong.liu Wed Jun 21 16:54:46 2023 > > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-mpich --download-fblaslapack > --download-ctetgen > > [0]PETSC ERROR: #1 DMDestroy() at > /home/xiaodong.liu/Documents/petsc-3.19.1/src/dm/interface/dm.c:639 > > > > On Tue, Jun 20, 2023 at 12:36?PM neil liu wrote: > > Thanks a lot, Constantine. It works pretty well. > > > > > > > > On Fri, Jun 16, 2023 at 6:52?PM Constantine Khrulev < > ckhroulev at alaska.edu> wrote: > > In your code the destructor of DMManage is called at the end of scope, > > i.e. after the PetscFinalize() call. > > > > You should be able to avoid this error by putting "DMManage objDMManage" > > in a code block to limit its scope and ensure that it is destroyed > > before PETSc is finalized: > > > > int main(int argc, char** argv) { > > PetscFunctionBeginUser; > > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); > > > > { > > DMManage objDMManage; > > } // objDMManage is destroyed here > > > > PetscFinalize(); > > return 0; > > } > > > > On 6/16/23 14:13, neil liu wrote: > > > Dear Petsc developers, > > > > > > I am trying to use Petsc with C++. And came across one issue. > > > Class DMManage has been defined, one default constructor and > > > destructor has been defined there. > > > The code has a runtime error, "double free or corruption". Finally I > > > found that, this is due to PetscFinalize. If I called explicitly the > > > destructor before this PetscFinalze, the error will disappear. > > > > > > Does that mean PetscFinalize do some work to destroy DM? 
> > > > > > Thanks, > > > > > > #include > > > #include > > > #include > > > #include > > > > > > class DMManage{ > > > PetscSF distributionSF; > > > public: > > > DM dm; > > > DMManage(); > > > ~DMManage(); > > > }; > > > > > > DMManage::DMManage(){ > > > const char filename[] = "ParallelWaveguide.msh"; > > > DM dmDist; > > > PetscViewer viewer; > > > PetscViewerCreate(PETSC_COMM_WORLD, &viewer); > > > PetscViewerSetType(viewer, PETSCVIEWERASCII); > > > PetscViewerFileSetMode(viewer, FILE_MODE_READ); > > > PetscViewerFileSetName(viewer, filename); > > > DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm); > > > PetscViewerDestroy(&viewer); > > > PetscInt overlap = 0; > > > DMPlexDistribute(dm, overlap, &distributionSF, &dmDist); > > > std::cout<<&dm< > > if (dmDist) { > > > DMDestroy(&dm); > > > dm = dmDist; > > > } > > > DMDestroy(&dmDist); > > > } > > > > > > DMManage::~DMManage(){ > > > DMDestroy(&dm); > > > } > > > > > > int main(int argc, char** argv) { > > > PetscFunctionBeginUser; > > > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); > > > > > > DMManage objDMManage; > > > > > > PetscFinalize(); > > > return 0; > > > } > > > > -- > > Constantine > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.chenjiahong at foxmail.com Wed Jun 21 22:49:54 2023 From: gael.chenjiahong at foxmail.com (=?ISO-8859-1?B?R2FlbA==?=) Date: Thu, 22 Jun 2023 11:49:54 +0800 Subject: [petsc-users] petsc configure problem Message-ID: To whom it may concern, I'm trying to install PETSc on my HPC, and in the configure step something keeps going wrong with the following message: PETSc requires c99 compiler! Configure could not determine compatible compiler flag. Perhaps you can specify via CFLAGS We have googled it and try to add CFLAGS spefified in this post Re: [petsc-users] PETSc on GCC (mail-archive.com). But none could work and the error message changed as: C compiler you provided with -with-cc=mpicc cannot be found or does not work. Cannot compile C with mpicc. My log file is attached. Could you please give some suggestions on this problem? Thank you, Jiahong CHEN -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 317429 bytes Desc: not available URL: From balay at mcs.anl.gov Wed Jun 21 23:15:29 2023 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 21 Jun 2023 23:15:29 -0500 (CDT) Subject: [petsc-users] petsc configure problem In-Reply-To: References: Message-ID: <402cbb7c-282d-e8b8-4356-ade9db1ab51f@mcs.anl.gov> > --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpi PETSC_ARCH=intel_2020 > Executing: mpicc --version > stdout: > gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) Did you intend to use Intel compilers? >>>>> /tmp/petsc-sMhvLh/config.setCompilers/conftest.c: In function ?main?: /tmp/petsc-sMhvLh/config.setCompilers/conftest.c:8:9: error: ?FLT_ROUNDS? undeclared (first use in this function) y = FLT_ROUNDS; ^~~~~~~~~~ <<<<< Perhaps the intel compiler setup in env is causing gcc to misbehave? 
i.e if you are attempting an install with intel compilers - use 'mpiicc, mpiicpc, mpiifort' if you are attempting an install with gcc - avoid the intel compiler setup in env Also currently supported release in petsc-3.19 - we suggest using it instead of 3.14 Satish On Thu, 22 Jun 2023, Gael wrote: > To whom it may concern, > > > I'm trying to install PETSc on my HPC, and in the configure step something keeps going wrong with the following message: > > > PETSc requires c99 compiler! Configure could not determine compatible compiler flag. Perhaps you can specify via CFLAGS > > > We have googled it and try to add CFLAGS spefified in this post Re: [petsc-users] PETSc on GCC (mail-archive.com). But none could work and the error message changed as: > > > C compiler you provided with -with-cc=mpicc cannot be found or does not work. Cannot compile C with mpicc. > > > > My log file is attached. Could you please give some suggestions on this problem? > > > Thank you, > Jiahong CHEN From gael.chenjiahong at foxmail.com Thu Jun 22 02:29:32 2023 From: gael.chenjiahong at foxmail.com (=?gb18030?B?R2FlbA==?=) Date: Thu, 22 Jun 2023 15:29:32 +0800 Subject: [petsc-users] petsc configure problem In-Reply-To: <402cbb7c-282d-e8b8-4356-ade9db1ab51f@mcs.anl.gov> References: <402cbb7c-282d-e8b8-4356-ade9db1ab51f@mcs.anl.gov> Message-ID: Thanks for your help! Using 'mpiicc, mpiicpc, mpiifort' works well! And yes, intel compiler is used intentionally for my installation. Jiahong CHEN ------------------ Original ------------------ From: "petsc-users" From niko.karin at gmail.com Thu Jun 22 12:54:15 2023 From: niko.karin at gmail.com (Karin&NiKo) Date: Thu, 22 Jun 2023 19:54:15 +0200 Subject: [petsc-users] snes_type aspin without DA example Message-ID: Dear PETSc team, I would like to play with aspin-type nonlinear solvers. I have found several tests like snes/tutorials/ex19.c but they all use DA, which I don't want to use since I need to stick at the algebraic level. Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up things. Unfortunately, I get the error "DM has no default decomposition defined. Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do understand. But I do not find any implementation of the SNESNASMSetSubdomains in petsc4py. Am I missing something ? Thanks, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jun 22 13:41:13 2023 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 22 Jun 2023 14:41:13 -0400 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: References: Message-ID: You are not missing anything. The petsc4py stub for SNESNASMSetSubdomains() has not been written. You could add it by adding to src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and then make a merge request https://petsc.org/release/developers/contributing/ to get it into PETSc. > On Jun 22, 2023, at 1:54 PM, Karin&NiKo wrote: > > Dear PETSc team, > > I would like to play with aspin-type nonlinear solvers. I have found several tests like snes/tutorials/ex19.c but they all use DA, which I don't want to use since I need to stick at the algebraic level. > Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up things. > Unfortunately, I get the error "DM has no default decomposition defined. Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do understand. > But I do not find any implementation of the SNESNASMSetSubdomains in petsc4py. 
> Am I missing something ? > Thanks, > Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko.karin at gmail.com Thu Jun 22 14:20:43 2023 From: niko.karin at gmail.com (Karin&NiKo) Date: Thu, 22 Jun 2023 21:20:43 +0200 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: References: Message-ID: Thank you Barry. I will try this. Should I provide a test in src/binding/petsc4py/test/test_snes.py ? Le jeu. 22 juin 2023 ? 20:41, Barry Smith a ?crit : > > You are not missing anything. The petsc4py stub for > SNESNASMSetSubdomains() has not been written. You could add it by adding to > > src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and then > make a merge request https://petsc.org/release/developers/contributing/ to get > it into PETSc. > > On Jun 22, 2023, at 1:54 PM, Karin&NiKo wrote: > > Dear PETSc team, > > I would like to play with aspin-type nonlinear solvers. I have found > several tests like snes/tutorials/ex19.c but they all use DA, which I don't > want to use since I need to stick at the algebraic level. > Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up > things. > Unfortunately, I get the error "DM has no default decomposition defined. > Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do > understand. > But I do not find any implementation of the SNESNASMSetSubdomains in > petsc4py. > Am I missing something ? > Thanks, > Nicolas > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jun 22 19:17:01 2023 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 22 Jun 2023 20:17:01 -0400 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: References: Message-ID: <31717DEF-0CEE-4721-9655-2B9447AD4283@petsc.dev> A test would be great. > On Jun 22, 2023, at 3:20 PM, Karin&NiKo wrote: > > Thank you Barry. I will try this. > Should I provide a test in src/binding/petsc4py/test/test_snes.py ? > > Le jeu. 22 juin 2023 ? 20:41, Barry Smith > a ?crit : >> >> You are not missing anything. The petsc4py stub for SNESNASMSetSubdomains() has not been written. You could add it by adding to >> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and then make a merge request https://petsc.org/release/developers/contributing/ to get it into PETSc. >> >>> On Jun 22, 2023, at 1:54 PM, Karin&NiKo > wrote: >>> >>> Dear PETSc team, >>> >>> I would like to play with aspin-type nonlinear solvers. I have found several tests like snes/tutorials/ex19.c but they all use DA, which I don't want to use since I need to stick at the algebraic level. >>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up things. >>> Unfortunately, I get the error "DM has no default decomposition defined. Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do understand. >>> But I do not find any implementation of the SNESNASMSetSubdomains in petsc4py. >>> Am I missing something ? >>> Thanks, >>> Nicolas >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Thu Jun 22 19:37:09 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Thu, 22 Jun 2023 17:37:09 -0700 Subject: [petsc-users] hypre-ILU vs hypre Euclid Message-ID: I know that PETSc has hooks for Euclid but I discovered today that it does not support 64 bit indices, which many MOOSE applications need. 
This would probably be more appropriate for a hypre support forum (does anyone know if such a forum exists other than opening GitHub issues?), but does anyone here know what the difference between hypre-ILU and hypre-Euclid are? From the docs it seems they are both supposed to be parallel ILU solvers. If hypre-ILU worked with 64 bit indices (I can probably check this sifting through the sources), then I would probably add hooks for it in PETSc (AFAICT those don't exist at present). -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jun 22 20:49:07 2023 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Jun 2023 21:49:07 -0400 Subject: [petsc-users] hypre-ILU vs hypre Euclid In-Reply-To: References: Message-ID: On Thu, Jun 22, 2023 at 8:37?PM Alexander Lindsay wrote: > I know that PETSc has hooks for Euclid but I discovered today that it does > not support 64 bit indices, which many MOOSE applications need. This would > probably be more appropriate for a hypre support forum (does anyone know if > such a forum exists other than opening GitHub issues?), but does anyone > here know what the difference between hypre-ILU and hypre-Euclid are? From > the docs it seems they are both supposed to be parallel ILU solvers. > > If hypre-ILU worked with 64 bit indices (I can probably check this sifting > through the sources), then I would probably add hooks for it in PETSc > (AFAICT those don't exist at present). > My understanding was that two different people were working on them. I do not know if either is still actively supported. We would of course like a binding to whatever is supported. Are you sure you want to run ILU? THanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Thu Jun 22 22:00:05 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Thu, 22 Jun 2023 20:00:05 -0700 Subject: [petsc-users] hypre-ILU vs hypre Euclid In-Reply-To: References: Message-ID: <3A3BBF1D-BF8F-47A6-82EC-F2157D71C467@gmail.com> An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Jun 22 23:11:24 2023 From: jed at jedbrown.org (Jed Brown) Date: Thu, 22 Jun 2023 22:11:24 -0600 Subject: [petsc-users] hypre-ILU vs hypre Euclid In-Reply-To: <3A3BBF1D-BF8F-47A6-82EC-F2157D71C467@gmail.com> References: <3A3BBF1D-BF8F-47A6-82EC-F2157D71C467@gmail.com> Message-ID: <87lega4y37.fsf@jedbrown.org> It looks like Victor is working on hypre-ILU so it is active. PETSc used to have PILUT support, but it was so buggy/leaky that we removed the interface. Alexander Lindsay writes: > Haha no I am not sure. There are a few other preconditioning options I will explore before knocking on this door some more. > > On Jun 22, 2023, at 6:49 PM, Matthew Knepley wrote: > > ?On Thu, Jun 22, 2023 at 8:37?PM Alexander Lindsay wrote: > > I know that PETSc has hooks for Euclid but I discovered today that it does not support 64 bit indices, which many MOOSE > applications need. This would probably be more appropriate for a hypre support forum (does anyone know if such a forum > exists other than opening GitHub issues?), but does anyone here know what the difference between hypre-ILU and > hypre-Euclid are? 
From the docs it seems they are both supposed to be parallel ILU solvers. > > If hypre-ILU worked with 64 bit indices (I can probably check this sifting through the sources), then I would probably add > hooks for it in PETSc (AFAICT those don't exist at present). > > My understanding was that two different people were working on them. I do not know if either is still actively supported. We > would of course like a binding to whatever is supported. > > Are you sure you want to run ILU? > > THanks, > > Matt > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to > which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From niko.karin at gmail.com Fri Jun 23 02:40:10 2023 From: niko.karin at gmail.com (Karin&NiKo) Date: Fri, 23 Jun 2023 09:40:10 +0200 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: <31717DEF-0CEE-4721-9655-2B9447AD4283@petsc.dev> References: <31717DEF-0CEE-4721-9655-2B9447AD4283@petsc.dev> Message-ID: Dear Barry, I have started looking at the code but I miss an example using SNESNASMSetSubdomains. In fact I do not even find a single use of the function in PETSc. Could someone provide me with an example ? Thanks, Nicolas Le ven. 23 juin 2023 ? 02:17, Barry Smith a ?crit : > > A test would be great. > > On Jun 22, 2023, at 3:20 PM, Karin&NiKo wrote: > > Thank you Barry. I will try this. > Should I provide a test in src/binding/petsc4py/test/test_snes.py ? > > Le jeu. 22 juin 2023 ? 20:41, Barry Smith a ?crit : > >> >> You are not missing anything. The petsc4py stub for >> SNESNASMSetSubdomains() has not been written. You could add it by adding to >> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and >> then make a merge request >> https://petsc.org/release/developers/contributing/ to get it into PETSc. >> >> On Jun 22, 2023, at 1:54 PM, Karin&NiKo wrote: >> >> Dear PETSc team, >> >> I would like to play with aspin-type nonlinear solvers. I have found >> several tests like snes/tutorials/ex19.c but they all use DA, which I don't >> want to use since I need to stick at the algebraic level. >> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up >> things. >> Unfortunately, I get the error "DM has no default decomposition defined. >> Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do >> understand. >> But I do not find any implementation of the SNESNASMSetSubdomains in >> petsc4py. >> Am I missing something ? >> Thanks, >> Nicolas >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko.karin at gmail.com Fri Jun 23 05:12:50 2023 From: niko.karin at gmail.com (Karin&NiKo) Date: Fri, 23 Jun 2023 12:12:50 +0200 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: References: <31717DEF-0CEE-4721-9655-2B9447AD4283@petsc.dev> Message-ID: In order to transfer from Python to C a list of int, real or bool as an input, there are the functions iarray_i, iarray_r and iarray_b. In order to transfer from C to Python a list of int, real, bool or pointers as an output, there are the functions oarray_i, oarray_r, oarray_b and oarray_p. Nevertheless I do not find the function iarray_p which (I think) is required to transfer a list of Scatter when calling SNESNASMSetSubdomains. Am I right ? Le ven. 23 juin 2023 ? 
09:40, Karin&NiKo a ?crit : > Dear Barry, > I have started looking at the code but I miss an example using > SNESNASMSetSubdomains. In fact I do not even find a single use of the > function in PETSc. > Could someone provide me with an example ? > Thanks, > Nicolas > > Le ven. 23 juin 2023 ? 02:17, Barry Smith a ?crit : > >> >> A test would be great. >> >> On Jun 22, 2023, at 3:20 PM, Karin&NiKo wrote: >> >> Thank you Barry. I will try this. >> Should I provide a test in src/binding/petsc4py/test/test_snes.py ? >> >> Le jeu. 22 juin 2023 ? 20:41, Barry Smith a ?crit : >> >>> >>> You are not missing anything. The petsc4py stub for >>> SNESNASMSetSubdomains() has not been written. You could add it by adding to >>> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and >>> then make a merge request >>> https://petsc.org/release/developers/contributing/ to get it into PETSc. >>> >>> On Jun 22, 2023, at 1:54 PM, Karin&NiKo wrote: >>> >>> Dear PETSc team, >>> >>> I would like to play with aspin-type nonlinear solvers. I have found >>> several tests like snes/tutorials/ex19.c but they all use DA, which I don't >>> want to use since I need to stick at the algebraic level. >>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up >>> things. >>> Unfortunately, I get the error "DM has no default decomposition >>> defined. Set subsolves manually with SNESNASMSetSubdomains()" which, I >>> think, I do understand. >>> But I do not find any implementation of the SNESNASMSetSubdomains in >>> petsc4py. >>> Am I missing something ? >>> Thanks, >>> Nicolas >>> >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Jun 23 05:35:34 2023 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 23 Jun 2023 06:35:34 -0400 Subject: [petsc-users] hypre-ILU vs hypre Euclid In-Reply-To: <87lega4y37.fsf@jedbrown.org> References: <3A3BBF1D-BF8F-47A6-82EC-F2157D71C467@gmail.com> <87lega4y37.fsf@jedbrown.org> Message-ID: Alexander, let me add that Ilu is pretty commodity, and is available with our vendor, back ends, and that is probably the more reliable route. Hyper?s AMG Solver is state of the art, but their ilu is not their focus. Mark. On Fri, Jun 23, 2023 at 12:11 AM Jed Brown wrote: > It looks like Victor is working on hypre-ILU so it is active. PETSc used > to have PILUT support, but it was so buggy/leaky that we removed the > interface. > > Alexander Lindsay writes: > > > Haha no I am not sure. There are a few other preconditioning options I > will explore before knocking on this door some more. > > > > On Jun 22, 2023, at 6:49 PM, Matthew Knepley wrote: > > > > ?On Thu, Jun 22, 2023 at 8:37?PM Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > > > > I know that PETSc has hooks for Euclid but I discovered today that it > does not support 64 bit indices, which many MOOSE > > applications need. This would probably be more appropriate for a hypre > support forum (does anyone know if such a forum > > exists other than opening GitHub issues?), but does anyone here know > what the difference between hypre-ILU and > > hypre-Euclid are? From the docs it seems they are both supposed to be > parallel ILU solvers. > > > > If hypre-ILU worked with 64 bit indices (I can probably check this > sifting through the sources), then I would probably add > > hooks for it in PETSc (AFAICT those don't exist at present). > > > > My understanding was that two different people were working on them. 
I > do not know if either is still actively supported. We > > would of course like a binding to whatever is supported. > > > > Are you sure you want to run ILU? > > > > THanks, > > > > Matt > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to > > which their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Fri Jun 23 09:28:05 2023 From: liufield at gmail.com (neil liu) Date: Fri, 23 Jun 2023 10:28:05 -0400 Subject: [petsc-users] Inquiry about PetscDTSimplexQuadrature . Message-ID: Dear Petsc developers, I am learning *PetscDTSimplexQuadrature *and plan to use it. I found that, in the biunit simplex (tetra), (-1,-1,-1),(1,-1,-1),(-1,1,-1),(-1,-1,1), degree 1: npoints 4, the sum of weights = 4/3(the volume of this simplex) degree 2 : npoints 8; For my previous experience, I used Gauss quadrature rules, (npoints =4 , 5, 11, 15). Then I am curious what rule is Petsc using ? And is *PetscDTSimplexQuadrature *used by PetscFE? Thanks a lot, Xiaodong -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jun 23 09:33:43 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 23 Jun 2023 10:33:43 -0400 Subject: [petsc-users] Inquiry about PetscDTSimplexQuadrature . In-Reply-To: References: Message-ID: On Fri, Jun 23, 2023 at 10:28?AM neil liu wrote: > Dear Petsc developers, > > I am learning *PetscDTSimplexQuadrature *and plan to use it. > I found that, in the biunit simplex (tetra), > (-1,-1,-1),(1,-1,-1),(-1,1,-1),(-1,-1,1), > degree 1: npoints 4, the sum of weights = 4/3(the volume of this simplex) > degree 2 : npoints 8; > For my previous experience, I used Gauss quadrature rules, (npoints =4 , > 5, 11, 15). > Then I am curious what rule is Petsc using ? > There are two supported types: - Stroud Conical Quadrature - Minimal Symmetric Quadrature I think we prefer symmetric when it is available. > And is *PetscDTSimplexQuadrature *used by PetscFE? > Yes. Thanks, Matt > > Thanks a lot, > > Xiaodong > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Fri Jun 23 10:36:27 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Fri, 23 Jun 2023 08:36:27 -0700 Subject: [petsc-users] hypre-ILU vs hypre Euclid In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jun 23 11:51:48 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 23 Jun 2023 12:51:48 -0400 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: References: <31717DEF-0CEE-4721-9655-2B9447AD4283@petsc.dev> Message-ID: Look at SNESSetUp_NASM() src/snes/nasm/nasm.c and see the if (dm) case. It calls DMCreateDomainDecomposition() to get subdms and then calls DMCreateDomainDecompositionScatters() to get the scatters that would be passed to SNESNASMSetSubdomains(). Then it builds the subsnes. These are the ingredients needed to construct the objects passed to SNESNASMSetSubdomains(). 
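In user code the same ingredients could be wired up roughly like this; this is only an untested sketch with made-up variable names (dm, snes, subsnes, ...), so the exact argument lists, and which communicator the sub-solves should live on, ought to be checked against the man pages and against nasm.c itself:

  PetscInt    nsub;
  char      **names;
  IS         *innerises, *outerises;
  DM         *subdms;
  VecScatter *iscat, *oscat, *gscat;
  SNES       *subsnes;

  PetscCall(DMCreateDomainDecomposition(dm, &nsub, &names, &innerises, &outerises, &subdms));
  PetscCall(DMCreateDomainDecompositionScatters(dm, nsub, subdms, &iscat, &oscat, &gscat));
  PetscCall(PetscMalloc1(nsub, &subsnes));
  for (PetscInt i = 0; i < nsub; i++) {
    /* SNESSetUp_NASM() creates the sub-solves itself; here they are built by hand */
    PetscCall(SNESCreate(PetscObjectComm((PetscObject)subdms[i]), &subsnes[i]));
    PetscCall(SNESSetDM(subsnes[i], subdms[i]));
    PetscCall(SNESSetFromOptions(subsnes[i]));
  }
  PetscCall(SNESNASMSetSubdomains(snes, nsub, subsnes, iscat, oscat, gscat));

Inside SNESSetUp_NASM() the scatters are instead stored straight into the NASM data structure (nasm->iscatter, nasm->oscatter, nasm->gscatter) rather than going through the public setter.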
One could argue that this code fragment would be better if it did not interact directly with the internals of &nasm->oscatter etc but instead created the objects and then (in the routine) called SNESNASMSetSubdomains() but anyways it is how it ended up. Barry > On Jun 23, 2023, at 3:40 AM, Karin&NiKo wrote: > > Dear Barry, > I have started looking at the code but I miss an example using SNESNASMSetSubdomains. In fact I do not even find a single use of the function in PETSc. > Could someone provide me with an example ? > Thanks, > Nicolas > > Le ven. 23 juin 2023 ? 02:17, Barry Smith > a ?crit : >> >> A test would be great. >> >>> On Jun 22, 2023, at 3:20 PM, Karin&NiKo > wrote: >>> >>> Thank you Barry. I will try this. >>> Should I provide a test in src/binding/petsc4py/test/test_snes.py ? >>> >>> Le jeu. 22 juin 2023 ? 20:41, Barry Smith > a ?crit : >>>> >>>> You are not missing anything. The petsc4py stub for SNESNASMSetSubdomains() has not been written. You could add it by adding to >>>> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and then make a merge request https://petsc.org/release/developers/contributing/ to get it into PETSc. >>>> >>>>> On Jun 22, 2023, at 1:54 PM, Karin&NiKo > wrote: >>>>> >>>>> Dear PETSc team, >>>>> >>>>> I would like to play with aspin-type nonlinear solvers. I have found several tests like snes/tutorials/ex19.c but they all use DA, which I don't want to use since I need to stick at the algebraic level. >>>>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up things. >>>>> Unfortunately, I get the error "DM has no default decomposition defined. Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do understand. >>>>> But I do not find any implementation of the SNESNASMSetSubdomains in petsc4py. >>>>> Am I missing something ? >>>>> Thanks, >>>>> Nicolas >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jun 23 11:57:05 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 23 Jun 2023 12:57:05 -0400 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: References: <31717DEF-0CEE-4721-9655-2B9447AD4283@petsc.dev> Message-ID: Take a look at setNestSubVecs() in /src/binding/petsc4py/src/petsc4py/PETSc/Vec.pyx it seems to deal with passing an array of PETSc objects. Likely there are additional instances where arrays of objects are passed between Python and C. I do not understand this code, but the Python experts will. Perhaps this type of construct can be hoisted up into a petsc4py implementation utility useful for any place in PETSc where arrays of objects must be passed back and forth. Barry > On Jun 23, 2023, at 6:12 AM, Karin&NiKo wrote: > > In order to transfer from Python to C a list of int, real or bool as an input, there are the functions iarray_i, iarray_r and iarray_b. > In order to transfer from C to Python a list of int, real, bool or pointers as an output, there are the functions oarray_i, oarray_r, oarray_b and oarray_p. > Nevertheless I do not find the function iarray_p which (I think) is required to transfer a list of Scatter when calling SNESNASMSetSubdomains. > Am I right ? > > Le ven. 23 juin 2023 ? 09:40, Karin&NiKo > a ?crit : >> Dear Barry, >> I have started looking at the code but I miss an example using SNESNASMSetSubdomains. In fact I do not even find a single use of the function in PETSc. >> Could someone provide me with an example ? >> Thanks, >> Nicolas >> >> Le ven. 23 juin 2023 ? 
02:17, Barry Smith > a ?crit : >>> >>> A test would be great. >>> >>>> On Jun 22, 2023, at 3:20 PM, Karin&NiKo > wrote: >>>> >>>> Thank you Barry. I will try this. >>>> Should I provide a test in src/binding/petsc4py/test/test_snes.py ? >>>> >>>> Le jeu. 22 juin 2023 ? 20:41, Barry Smith > a ?crit : >>>>> >>>>> You are not missing anything. The petsc4py stub for SNESNASMSetSubdomains() has not been written. You could add it by adding to >>>>> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and then make a merge request https://petsc.org/release/developers/contributing/ to get it into PETSc. >>>>> >>>>>> On Jun 22, 2023, at 1:54 PM, Karin&NiKo > wrote: >>>>>> >>>>>> Dear PETSc team, >>>>>> >>>>>> I would like to play with aspin-type nonlinear solvers. I have found several tests like snes/tutorials/ex19.c but they all use DA, which I don't want to use since I need to stick at the algebraic level. >>>>>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up things. >>>>>> Unfortunately, I get the error "DM has no default decomposition defined. Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do understand. >>>>>> But I do not find any implementation of the SNESNASMSetSubdomains in petsc4py. >>>>>> Am I missing something ? >>>>>> Thanks, >>>>>> Nicolas >>>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko.karin at gmail.com Fri Jun 23 12:00:55 2023 From: niko.karin at gmail.com (Karin&NiKo) Date: Fri, 23 Jun 2023 19:00:55 +0200 Subject: [petsc-users] snes_type aspin without DA example In-Reply-To: References: <31717DEF-0CEE-4721-9655-2B9447AD4283@petsc.dev> Message-ID: Thanks Barry, I'll check it out. Le ven. 23 juin 2023 ? 18:57, Barry Smith a ?crit : > > Take a look at setNestSubVecs() > in /src/binding/petsc4py/src/petsc4py/PETSc/Vec.pyx it seems to deal with > passing an array of PETSc objects. Likely there are additional instances > where arrays of objects are passed between Python and C. > > I do not understand this code, but the Python experts will. Perhaps this > type of construct can be hoisted up into a petsc4py implementation utility > useful for any place in PETSc where arrays of objects must be passed back > and forth. > > Barry > > > On Jun 23, 2023, at 6:12 AM, Karin&NiKo wrote: > > In order to transfer from Python to C a list of int, real or bool as an > input, there are the functions iarray_i, iarray_r and iarray_b. > In order to transfer from C to Python a list of int, real, bool or > pointers as an output, there are the functions oarray_i, oarray_r, oarray_b > and oarray_p. > Nevertheless I do not find the function iarray_p which (I think) is > required to transfer a list of Scatter when calling SNESNASMSetSubdomains. > Am I right ? > > Le ven. 23 juin 2023 ? 09:40, Karin&NiKo a ?crit : > >> Dear Barry, >> I have started looking at the code but I miss an example using >> SNESNASMSetSubdomains. In fact I do not even find a single use of the >> function in PETSc. >> Could someone provide me with an example ? >> Thanks, >> Nicolas >> >> Le ven. 23 juin 2023 ? 02:17, Barry Smith a ?crit : >> >>> >>> A test would be great. >>> >>> On Jun 22, 2023, at 3:20 PM, Karin&NiKo wrote: >>> >>> Thank you Barry. I will try this. >>> Should I provide a test in src/binding/petsc4py/test/test_snes.py ? >>> >>> Le jeu. 22 juin 2023 ? 20:41, Barry Smith a ?crit : >>> >>>> >>>> You are not missing anything. The petsc4py stub for >>>> SNESNASMSetSubdomains() has not been written. 
You could add it by adding to >>>> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and >>>> then make a merge request >>>> https://petsc.org/release/developers/contributing/ to get it into >>>> PETSc. >>>> >>>> On Jun 22, 2023, at 1:54 PM, Karin&NiKo wrote: >>>> >>>> Dear PETSc team, >>>> >>>> I would like to play with aspin-type nonlinear solvers. I have found >>>> several tests like snes/tutorials/ex19.c but they all use DA, which I don't >>>> want to use since I need to stick at the algebraic level. >>>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set >>>> up things. >>>> Unfortunately, I get the error "DM has no default decomposition >>>> defined. Set subsolves manually with SNESNASMSetSubdomains()" which, I >>>> think, I do understand. >>>> But I do not find any implementation of the SNESNASMSetSubdomains in >>>> petsc4py. >>>> Am I missing something ? >>>> Thanks, >>>> Nicolas >>>> >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Fri Jun 23 12:18:07 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Fri, 23 Jun 2023 10:18:07 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: <875y7ymzc2.fsf@jedbrown.org> References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> Message-ID: Hi Jed, I will come back with answers to all of your questions at some point. I mostly just deal with MOOSE users who come to me and tell me their solve is converging slowly, asking me how to fix it. So I generally assume they have built an appropriate mesh and problem size for the problem they want to solve and added appropriate turbulence modeling (although my general assumption is often violated). > And to confirm, are you doing a nonlinearly implicit velocity-pressure solve? Yes, this is our default. A general question: it seems that it is well known that the quality of selfp degrades with increasing advection. Why is that? On Wed, Jun 7, 2023 at 8:01?PM Jed Brown wrote: > Alexander Lindsay writes: > > > This has been a great discussion to follow. Regarding > > > >> when time stepping, you have enough mass matrix that cheaper > preconditioners are good enough > > > > I'm curious what some algebraic recommendations might be for high Re in > > transients. > > What mesh aspect ratio and streamline CFL number? Assuming your model is > turbulent, can you say anything about momentum thickness Reynolds number > Re_?? What is your wall normal spacing in plus units? (Wall resolved or > wall modeled?) > > And to confirm, are you doing a nonlinearly implicit velocity-pressure > solve? > > > I've found one-level DD to be ineffective when applied monolithically or > to the momentum block of a split, as it scales with the mesh size. > > I wouldn't put too much weight on "scaling with mesh size" per se. You > want an efficient solver for the coarsest mesh that delivers sufficient > accuracy in your flow regime. Constants matter. > > Refining the mesh while holding time steps constant changes the advective > CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful > scaling study is to increase Reynolds number (e.g., by growing the domain) > while keeping mesh size matched in terms of plus units in the viscous > sublayer and Kolmogorov length in the outer boundary layer. 
That turns out > to not be a very automatic study to do, but it's what matters and you can > spend a lot of time chasing ghosts with naive scaling studies. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Fri Jun 23 12:23:34 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Fri, 23 Jun 2023 10:23:34 -0700 Subject: [petsc-users] hypre-ILU vs hypre Euclid In-Reply-To: References: Message-ID: Based on https://github.com/hypre-space/hypre/issues/937 it sounds like hypre-ILU is under active development and should be the one we focus on bindings for. It does support 64 bit indices and GPU On Fri, Jun 23, 2023 at 8:36?AM Alexander Lindsay wrote: > Thanks all for your replies. Mark, I?m a little unclear on what you said. > My understanding is that PETSc ILU is serial only (or can be used as the > sub PC in DD PCs). > > On Jun 23, 2023, at 3:35 AM, Mark Adams wrote: > > ? > Alexander, let me add that Ilu is pretty commodity, and is available with > our vendor, back ends, and that is probably the more reliable route. > Hyper?s AMG Solver is state of the art, but their ilu is not their focus. > > Mark. > > On Fri, Jun 23, 2023 at 12:11 AM Jed Brown wrote: > >> It looks like Victor is working on hypre-ILU so it is active. PETSc used >> to have PILUT support, but it was so buggy/leaky that we removed the >> interface. >> >> Alexander Lindsay writes: >> >> > Haha no I am not sure. There are a few other preconditioning options I >> will explore before knocking on this door some more. >> > >> > On Jun 22, 2023, at 6:49 PM, Matthew Knepley >> wrote: >> > >> > ?On Thu, Jun 22, 2023 at 8:37?PM Alexander Lindsay < >> alexlindsay239 at gmail.com> wrote: >> > >> > I know that PETSc has hooks for Euclid but I discovered today that it >> does not support 64 bit indices, which many MOOSE >> > applications need. This would probably be more appropriate for a hypre >> support forum (does anyone know if such a forum >> > exists other than opening GitHub issues?), but does anyone here know >> what the difference between hypre-ILU and >> > hypre-Euclid are? From the docs it seems they are both supposed to be >> parallel ILU solvers. >> > >> > If hypre-ILU worked with 64 bit indices (I can probably check this >> sifting through the sources), then I would probably add >> > hooks for it in PETSc (AFAICT those don't exist at present). >> > >> > My understanding was that two different people were working on them. I >> do not know if either is still actively supported. We >> > would of course like a binding to whatever is supported. >> > >> > Are you sure you want to run ILU? >> > >> > THanks, >> > >> > Matt >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> > which their experiments lead. >> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Fri Jun 23 14:02:52 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Fri, 23 Jun 2023 12:02:52 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> Message-ID: I guess it is because the inverse of the diagonal form of A00 becomes a poor representation of the inverse of A00? 
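(For reference, and assuming I am reading the man pages correctly: with -pc_fieldsplit_schur_precondition selfp the preconditioning matrix for the Schur complement is assembled as

  Sp = A11 - A10 inv(diag(A00)) A01

and -mat_schur_complement_ainv_type chooses what stands in for inv(A00) there (diag, lump, blockdiag, or now full). So the question is really about when diag(A00) stops being a usable surrogate for A00.)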
I guess naively I would have thought that the blockdiag form of A00 is A00 On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay wrote: > Hi Jed, I will come back with answers to all of your questions at some > point. I mostly just deal with MOOSE users who come to me and tell me their > solve is converging slowly, asking me how to fix it. So I generally assume > they have built an appropriate mesh and problem size for the problem they > want to solve and added appropriate turbulence modeling (although my > general assumption is often violated). > > > And to confirm, are you doing a nonlinearly implicit velocity-pressure > solve? > > Yes, this is our default. > > A general question: it seems that it is well known that the quality of > selfp degrades with increasing advection. Why is that? > > On Wed, Jun 7, 2023 at 8:01?PM Jed Brown wrote: > >> Alexander Lindsay writes: >> >> > This has been a great discussion to follow. Regarding >> > >> >> when time stepping, you have enough mass matrix that cheaper >> preconditioners are good enough >> > >> > I'm curious what some algebraic recommendations might be for high Re in >> > transients. >> >> What mesh aspect ratio and streamline CFL number? Assuming your model is >> turbulent, can you say anything about momentum thickness Reynolds number >> Re_?? What is your wall normal spacing in plus units? (Wall resolved or >> wall modeled?) >> >> And to confirm, are you doing a nonlinearly implicit velocity-pressure >> solve? >> >> > I've found one-level DD to be ineffective when applied monolithically >> or to the momentum block of a split, as it scales with the mesh size. >> >> I wouldn't put too much weight on "scaling with mesh size" per se. You >> want an efficient solver for the coarsest mesh that delivers sufficient >> accuracy in your flow regime. Constants matter. >> >> Refining the mesh while holding time steps constant changes the advective >> CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful >> scaling study is to increase Reynolds number (e.g., by growing the domain) >> while keeping mesh size matched in terms of plus units in the viscous >> sublayer and Kolmogorov length in the outer boundary layer. That turns out >> to not be a very automatic study to do, but it's what matters and you can >> spend a lot of time chasing ghosts with naive scaling studies. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Fri Jun 23 14:39:59 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Fri, 23 Jun 2023 12:39:59 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> Message-ID: Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type that I get a single iteration for the Schur complement solve with LU. That's a nice testing option On Fri, Jun 23, 2023 at 12:02?PM Alexander Lindsay wrote: > I guess it is because the inverse of the diagonal form of A00 becomes a > poor representation of the inverse of A00? I guess naively I would have > thought that the blockdiag form of A00 is A00 > > On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> Hi Jed, I will come back with answers to all of your questions at some >> point. 
I mostly just deal with MOOSE users who come to me and tell me their >> solve is converging slowly, asking me how to fix it. So I generally assume >> they have built an appropriate mesh and problem size for the problem they >> want to solve and added appropriate turbulence modeling (although my >> general assumption is often violated). >> >> > And to confirm, are you doing a nonlinearly implicit velocity-pressure >> solve? >> >> Yes, this is our default. >> >> A general question: it seems that it is well known that the quality of >> selfp degrades with increasing advection. Why is that? >> >> On Wed, Jun 7, 2023 at 8:01?PM Jed Brown wrote: >> >>> Alexander Lindsay writes: >>> >>> > This has been a great discussion to follow. Regarding >>> > >>> >> when time stepping, you have enough mass matrix that cheaper >>> preconditioners are good enough >>> > >>> > I'm curious what some algebraic recommendations might be for high Re in >>> > transients. >>> >>> What mesh aspect ratio and streamline CFL number? Assuming your model is >>> turbulent, can you say anything about momentum thickness Reynolds number >>> Re_?? What is your wall normal spacing in plus units? (Wall resolved or >>> wall modeled?) >>> >>> And to confirm, are you doing a nonlinearly implicit velocity-pressure >>> solve? >>> >>> > I've found one-level DD to be ineffective when applied monolithically >>> or to the momentum block of a split, as it scales with the mesh size. >>> >>> I wouldn't put too much weight on "scaling with mesh size" per se. You >>> want an efficient solver for the coarsest mesh that delivers sufficient >>> accuracy in your flow regime. Constants matter. >>> >>> Refining the mesh while holding time steps constant changes the >>> advective CFL number as well as cell Peclet/cell Reynolds numbers. A >>> meaningful scaling study is to increase Reynolds number (e.g., by growing >>> the domain) while keeping mesh size matched in terms of plus units in the >>> viscous sublayer and Kolmogorov length in the outer boundary layer. That >>> turns out to not be a very automatic study to do, but it's what matters and >>> you can spend a lot of time chasing ghosts with naive scaling studies. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.jolivet at lip6.fr Fri Jun 23 15:06:53 2023 From: pierre.jolivet at lip6.fr (Pierre Jolivet) Date: Fri, 23 Jun 2023 22:06:53 +0200 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> Message-ID: > On 23 Jun 2023, at 9:39 PM, Alexander Lindsay wrote: > > Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type That was not initially done by me (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit to use KSPMatSolve(), so that if you have a small Schur complement ? which is not really the case for NS ? this could be a viable option, it was previously painfully slow). Thanks, Pierre > that I get a single iteration for the Schur complement solve with LU. That's a nice testing option > > On Fri, Jun 23, 2023 at 12:02?PM Alexander Lindsay > wrote: >> I guess it is because the inverse of the diagonal form of A00 becomes a poor representation of the inverse of A00? 
I guess naively I would have thought that the blockdiag form of A00 is A00 >> >> On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay > wrote: >>> Hi Jed, I will come back with answers to all of your questions at some point. I mostly just deal with MOOSE users who come to me and tell me their solve is converging slowly, asking me how to fix it. So I generally assume they have built an appropriate mesh and problem size for the problem they want to solve and added appropriate turbulence modeling (although my general assumption is often violated). >>> >>> > And to confirm, are you doing a nonlinearly implicit velocity-pressure solve? >>> >>> Yes, this is our default. >>> >>> A general question: it seems that it is well known that the quality of selfp degrades with increasing advection. Why is that? >>> >>> On Wed, Jun 7, 2023 at 8:01?PM Jed Brown > wrote: >>>> Alexander Lindsay > writes: >>>> >>>> > This has been a great discussion to follow. Regarding >>>> > >>>> >> when time stepping, you have enough mass matrix that cheaper preconditioners are good enough >>>> > >>>> > I'm curious what some algebraic recommendations might be for high Re in >>>> > transients. >>>> >>>> What mesh aspect ratio and streamline CFL number? Assuming your model is turbulent, can you say anything about momentum thickness Reynolds number Re_?? What is your wall normal spacing in plus units? (Wall resolved or wall modeled?) >>>> >>>> And to confirm, are you doing a nonlinearly implicit velocity-pressure solve? >>>> >>>> > I've found one-level DD to be ineffective when applied monolithically or to the momentum block of a split, as it scales with the mesh size. >>>> >>>> I wouldn't put too much weight on "scaling with mesh size" per se. You want an efficient solver for the coarsest mesh that delivers sufficient accuracy in your flow regime. Constants matter. >>>> >>>> Refining the mesh while holding time steps constant changes the advective CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful scaling study is to increase Reynolds number (e.g., by growing the domain) while keeping mesh size matched in terms of plus units in the viscous sublayer and Kolmogorov length in the outer boundary layer. That turns out to not be a very automatic study to do, but it's what matters and you can spend a lot of time chasing ghosts with naive scaling studies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.jolivet at lip6.fr Fri Jun 23 15:09:01 2023 From: pierre.jolivet at lip6.fr (Pierre Jolivet) Date: Fri, 23 Jun 2023 22:09:01 +0200 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> Message-ID: <15FFDCF6-48C9-4331-A9FE-932BBDD418D1@lip6.fr> > On 23 Jun 2023, at 10:06 PM, Pierre Jolivet wrote: > > >> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay wrote: >> >> Ah, I see that if I use Pierre's new 'full' option for -mat_schur_complement_ainv_type > > That was not initially done by me Oops, sorry for the noise, looks like it was done by me indeed in 9399e4fd88c6621aad8fe9558ce84df37bd6fada? Thanks, Pierre > (though I recently tweaked MatSchurComplementComputeExplicitOperator() a bit to use KSPMatSolve(), so that if you have a small Schur complement ? which is not really the case for NS ? this could be a viable option, it was previously painfully slow). 
> > Thanks, > Pierre > >> that I get a single iteration for the Schur complement solve with LU. That's a nice testing option >> >> On Fri, Jun 23, 2023 at 12:02?PM Alexander Lindsay > wrote: >>> I guess it is because the inverse of the diagonal form of A00 becomes a poor representation of the inverse of A00? I guess naively I would have thought that the blockdiag form of A00 is A00 >>> >>> On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay > wrote: >>>> Hi Jed, I will come back with answers to all of your questions at some point. I mostly just deal with MOOSE users who come to me and tell me their solve is converging slowly, asking me how to fix it. So I generally assume they have built an appropriate mesh and problem size for the problem they want to solve and added appropriate turbulence modeling (although my general assumption is often violated). >>>> >>>> > And to confirm, are you doing a nonlinearly implicit velocity-pressure solve? >>>> >>>> Yes, this is our default. >>>> >>>> A general question: it seems that it is well known that the quality of selfp degrades with increasing advection. Why is that? >>>> >>>> On Wed, Jun 7, 2023 at 8:01?PM Jed Brown > wrote: >>>>> Alexander Lindsay > writes: >>>>> >>>>> > This has been a great discussion to follow. Regarding >>>>> > >>>>> >> when time stepping, you have enough mass matrix that cheaper preconditioners are good enough >>>>> > >>>>> > I'm curious what some algebraic recommendations might be for high Re in >>>>> > transients. >>>>> >>>>> What mesh aspect ratio and streamline CFL number? Assuming your model is turbulent, can you say anything about momentum thickness Reynolds number Re_?? What is your wall normal spacing in plus units? (Wall resolved or wall modeled?) >>>>> >>>>> And to confirm, are you doing a nonlinearly implicit velocity-pressure solve? >>>>> >>>>> > I've found one-level DD to be ineffective when applied monolithically or to the momentum block of a split, as it scales with the mesh size. >>>>> >>>>> I wouldn't put too much weight on "scaling with mesh size" per se. You want an efficient solver for the coarsest mesh that delivers sufficient accuracy in your flow regime. Constants matter. >>>>> >>>>> Refining the mesh while holding time steps constant changes the advective CFL number as well as cell Peclet/cell Reynolds numbers. A meaningful scaling study is to increase Reynolds number (e.g., by growing the domain) while keeping mesh size matched in terms of plus units in the viscous sublayer and Kolmogorov length in the outer boundary layer. That turns out to not be a very automatic study to do, but it's what matters and you can spend a lot of time chasing ghosts with naive scaling studies. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From psteb at bobolin.com.pl Fri Jun 23 16:30:52 2023 From: psteb at bobolin.com.pl (=?UTF-8?B?UGF3ZcWCIFN0ZWJsacWEc2tp?=) Date: Fri, 23 Jun 2023 23:30:52 +0200 Subject: [petsc-users] Petsc 3.19.2 to 3.18.0 error possibility. Message-ID: I am micromagnetic (MAGPAR) software developer. Old Magpar version has been using petsc 3.1-p8. I have decided to upgrade to petsc 3.19.2 with avx512 support. Unfortunately there appeared an error during software testing. Error appeares in ranning code after proper compiling and linking. The bug is in a code part which initializes matrix in petsc library versions: 3.19.2, 3.19.1, 3.19.0 and also from 3.18.5 to 3.18.0. If we use petsc version 3.17.5 the error doesn't appear. 
With this version (3.17.5) all is ok and simulation is running without any errors or throwing exceptions. My guess is linked to avx512 implementation which is good up to 3.17.5 version and buggy in upper mentioned versions with higher numbers. Avx512 is buggy according to tested SeqAij matrices. The exception is not thrown if we comment code fragment below. ierr = MatCreateSeqAIJ( ??? PETSC_COMM_SELF, ??? nvert,nvert, ??? 0,ia, ??? &mat ? );CHKERRQ(ierr); ? ierr = MatSetFromOptions(mat);CHKERRQ(ierr); ia - is number of nonzeros array which is obtained according parmetis partitioning. There were the same version of parmetis? (3.1.1) in the all considered cases. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jun 23 16:41:07 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 23 Jun 2023 17:41:07 -0400 Subject: [petsc-users] Petsc 3.19.2 to 3.18.0 error possibility. In-Reply-To: References: Message-ID: Could you send us the exact error output that occurs? Cut and paste the run command and the entire error message. Also send the configure options you used. Have you tried configuring the later PETSc versions with all optimization turned off; use --with-debugging=1 --with-cflags='-g -O0' --with-cxxflags='-g -O0' --with-fflags='-g -O0' Does the same error occur? Barry > On Jun 23, 2023, at 5:30 PM, Pawe? Stebli?ski via petsc-users wrote: > > I am micromagnetic (MAGPAR) software developer. Old Magpar version has been using petsc 3.1-p8. I have decided to upgrade to petsc 3.19.2 with avx512 support. Unfortunately there appeared an error during software testing. Error appeares in ranning code after proper compiling and linking. The bug is in a code part which initializes matrix in petsc library versions: 3.19.2, 3.19.1, 3.19.0 and also from 3.18.5 to 3.18.0. If we use petsc version 3.17.5 the error doesn't appear. With this version (3.17.5) all is ok and simulation is running without any errors or throwing exceptions. My guess is linked to avx512 implementation which is good up to 3.17.5 version and buggy in upper mentioned versions with higher numbers. Avx512 is buggy according to tested SeqAij matrices. > > The exception is not thrown if we comment code fragment below. > > ierr = MatCreateSeqAIJ( > PETSC_COMM_SELF, > nvert,nvert, > 0,ia, > &mat > );CHKERRQ(ierr); > ierr = MatSetFromOptions(mat);CHKERRQ(ierr); > > ia - is number of nonzeros array which is obtained according parmetis partitioning. There were the same version of parmetis (3.1.1) in the all considered cases. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Fri Jun 23 21:12:28 2023 From: liufield at gmail.com (neil liu) Date: Fri, 23 Jun 2023 22:12:28 -0400 Subject: [petsc-users] Inquiry about PetscDTSimplexQuadrature . In-Reply-To: References: Message-ID: Thanks, Matt. It seems DMPlexComputeCellGeometryFEM works well with the quadrature points to deliver Jacobian and inverse one. Will it be a good choice ? Have a good night. Thanks, On Fri, Jun 23, 2023 at 10:33?AM Matthew Knepley wrote: > On Fri, Jun 23, 2023 at 10:28?AM neil liu wrote: > >> Dear Petsc developers, >> >> I am learning *PetscDTSimplexQuadrature *and plan to use it. >> I found that, in the biunit simplex (tetra), >> (-1,-1,-1),(1,-1,-1),(-1,1,-1),(-1,-1,1), >> degree 1: npoints 4, the sum of weights = 4/3(the volume of this simplex) >> degree 2 : npoints 8; >> For my previous experience, I used Gauss quadrature rules, (npoints =4 , >> 5, 11, 15). 
>> Then I am curious what rule is Petsc using ? >> > > There are two supported types: > > - Stroud Conical Quadrature > > - Minimal Symmetric Quadrature > > I think we prefer symmetric when it is available. > > >> And is *PetscDTSimplexQuadrature *used by PetscFE? >> > > Yes. > > Thanks, > > Matt > > >> >> Thanks a lot, >> >> Xiaodong >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jun 24 07:54:06 2023 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 24 Jun 2023 08:54:06 -0400 Subject: [petsc-users] Inquiry about PetscDTSimplexQuadrature . In-Reply-To: References: Message-ID: On Fri, Jun 23, 2023 at 10:12?PM neil liu wrote: > Thanks, Matt. > It seems DMPlexComputeCellGeometryFEM works well with the quadrature > points to deliver Jacobian and inverse one. > Will it be a good choice ? > Yes, it is intended to compute these quantities for you. Let me know if it does not do what you want. Thanks, Matt > Have a good night. > > Thanks, > > On Fri, Jun 23, 2023 at 10:33?AM Matthew Knepley > wrote: > >> On Fri, Jun 23, 2023 at 10:28?AM neil liu wrote: >> >>> Dear Petsc developers, >>> >>> I am learning *PetscDTSimplexQuadrature *and plan to use it. >>> I found that, in the biunit simplex (tetra), >>> (-1,-1,-1),(1,-1,-1),(-1,1,-1),(-1,-1,1), >>> degree 1: npoints 4, the sum of weights = 4/3(the volume of this simplex) >>> degree 2 : npoints 8; >>> For my previous experience, I used Gauss quadrature rules, (npoints =4 , >>> 5, 11, 15). >>> Then I am curious what rule is Petsc using ? >>> >> >> There are two supported types: >> >> - Stroud Conical Quadrature >> >> - Minimal Symmetric Quadrature >> >> I think we prefer symmetric when it is available. >> >> >>> And is *PetscDTSimplexQuadrature *used by PetscFE? >>> >> >> Yes. >> >> Thanks, >> >> Matt >> >> >>> >>> Thanks a lot, >>> >>> Xiaodong >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sat Jun 24 09:29:51 2023 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 24 Jun 2023 10:29:51 -0400 Subject: [petsc-users] Petsc 3.19.2 to 3.18.0 error possibility. In-Reply-To: References: Message-ID: <85018E79-AD5C-443D-A3E3-20059503E5AB@petsc.dev> Search through all your code looking for calls to VecGetArray()/VecRestoreArray(). For all uses where you only need to read from the array, replace the calls with VecGetArrayRead()/VecRestoreArrayRead(). And add the const modifier to the array declaration. > On Jun 24, 2023, at 5:32 AM, Pawe? Stebli?ski wrote: > > Welcome > > If library nr 3.17.5 is compiled --with-debuging=yes there apears an error. This exception is not thrown, and all system seems to work well when one compiles --with-debugging=no. 
> > ------ Error Message -------------------------------------------------------------- > [79]PETSC ERROR: Object is in wrong state > [79]PETSC ERROR: Vec is already locked for read-only or read/write access, argument # 1 > [79]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > [79]PETSC ERROR: Petsc Release Version 3.17.5, Sep 30, 2022 > [79]PETSC ERROR: /home/psteb/PRACA/KODY/SYMETRYLLBINTERTWOMODELS_NEW3.9/src/magpar.exe on a PETSc-config-magpar named magmain-1 by root Sat Jun 24 11:18:46 2023 > [79]PETSC ERROR: Configure options --with-precision=double --download-fblaslapack=1 --with-avx512-kernels=1 --with-mpi-dir=/home/psteb/PRACA/LIB_NEW3.9/../LIB_NEW/mpi --with-x=0 --with-clanguage=C++ --with-debugging=yes PETSC_ARCH=PETSc-config-magpar > [79]PETSC ERROR: #1 VecSetErrorIfLocked() at /home/psteb/PRACA/LIB_NEW3.9/petsc-3.17.5.DEBUG/include/petscvec.h:618 > [79]PETSC ERROR: #2 VecGetArray() at /home/psteb/PRACA/LIB_NEW3.9/petsc-3.17.5.DEBUG/src/vec/vec/interface/rvector0e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 > 9.930000e-04 0.000000e+00 3.110177e+00 0.000000e+00 2.353828e-45 0.000000e+00 0.000000e+00 0.000000e+00 3.000000e+02 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 -7.952733e-40 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 > 9.940000e-04 0.000000e+00 3.110177e+00 0.000000e+00 1.558555e-45 0.000000e+00 0.000000e+00 0.000000e+00 3.000000e+02 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 -5.271514e-40 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 > 9.950000e-04 0.000000e+00 3.110177e+00 0.000000e+00 1.031404e-45 0.000000e+00 0.000000e+00 0.000000e+00 3.000000e+02 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 -3.492309e-40 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00]PETSC ERROR: #3 JAC3() at /home/psteb/PRACA/KODY/SYMETRYLLBINTERTWOMODELS_NEW3.9/src/llb/countmeq.c:161 > [0]PETSC ERROR: #4 TaoComputeJacobian() at /home/psteb/PRACA/LIB_NEW3.9/petsc-3.17.5.DEBUG/src/tao/interface/taosolver_hj.c:316 > [0]PETSC ERROR: #5 Tao_SSLS_FunctionGradient() at /home/psteb/PRACA/LIB_NEW3.9/petsc-3.17.5.DEBUG/src/tao/complementarity/impls/ssls/ssls.c:51 > [0]PETSC ERROR: #6 TaoLineSearchComputeObjectiveAndGradient() at /home/psteb/PRACA/LIB_NEW3.9/petsc-3.17.5.DEBUG/src/tao/linesearch/interface/taolinesearch.c:938 > [0]PETSC ERROR: #7 TaoSolve_SSILS() at /home/psteb/PRACA/LIB_NEW3.9/petsc-3.17.5.DEBUG/src/tao/complementarity/impls/ssls/ssils.c:54 > [0]PETSC ERROR: #8 TaoSolve() at /home/psteb/PRACA/LIB_NEW3.9/petsc-3.17.5.DEBUG/src/tao/interface/taosolver.c:136 > [0]PETSC ERROR: #9 createMeq() at /home/psteb/PRACA/KODY/SYMETRYLLBINTERTWOMODELS_NEW3.9/src/llb/countmeq.c:259 > [0]PETSC ERROR: #10 countMeq() at /home/psteb/PRACA/KODY/SYMETRYLLBINTERTWOMODELS_NEW3.9/src/llb/countmeq.c:296 > [0]PETSC ERRO[server]: PMIU_parse_keyvals: unexpected key delimiter at character 48 in cmd > > > For petsc nr from 3.19.2 to 3.18.0 for options > > --with-debugging=1 --with-cflags='-g -O0' --with-cxxflags='-g -O0' --with-fflags='-g -O0' > > or > > without these options, during execution > > There is an error: > > [6]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [6]PETSC ERROR: Object is in wrong state > [6]PETSC ERROR: Not for unassembled vector, did you call VecAssemblyBegin()/VecAssemblyEnd()? > [6]PETSC ERROR: WARNING! 
There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc! > [6]PETSC ERROR: Option left: name:-addcalorics value: 0 source: command line > [6]PETSC ERROR: Option left: name:-addhtomeq value: 0 source: command line > [6]PETSC ERROR: Option left: name:-addjtog value: 0 source: command line > [6]PETSC ERROR: Option left: name:-addterm value: 0 source: command line > [6]PETSC ERROR: Option left: name:-condinp_j value: 1e99 source: file > [6]PETSC ERROR: Option left: name:-condinp_t value: 1e-4 source: command line > [6]PETSC ERROR: Option left: name:-countG value: 0 source: command line > [6]PETSC ERROR: Option left: name:-countN value: 0 source: file > [6]PETSC ERROR: Option lef[8]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > > > > > > > W dniu 23.06.2023 o 23:41, Barry Smith pisze: >> >> Could you send us the exact error output that occurs? Cut and paste the run command and the entire error message. >> >> Also send the configure options you used. Have you tried configuring the later PETSc versions with all optimization turned off; use --with-debugging=1 --with-cflags='-g -O0' --with-cxxflags='-g -O0' --with-fflags='-g -O0' Does the same error occur? >> >> Barry >> >> >> >> >>> On Jun 23, 2023, at 5:30 PM, Pawe? Stebli?ski via petsc-users wrote: >>> >>> I am micromagnetic (MAGPAR) software developer. Old Magpar version has been using petsc 3.1-p8. I have decided to upgrade to petsc 3.19.2 with avx512 support. Unfortunately there appeared an error during software testing. Error appeares in ranning code after proper compiling and linking. The bug is in a code part which initializes matrix in petsc library versions: 3.19.2, 3.19.1, 3.19.0 and also from 3.18.5 to 3.18.0. If we use petsc version 3.17.5 the error doesn't appear. With this version (3.17.5) all is ok and simulation is running without any errors or throwing exceptions. My guess is linked to avx512 implementation which is good up to 3.17.5 version and buggy in upper mentioned versions with higher numbers. Avx512 is buggy according to tested SeqAij matrices. >>> >>> The exception is not thrown if we comment code fragment below. >>> >>> ierr = MatCreateSeqAIJ( >>> PETSC_COMM_SELF, >>> nvert,nvert, >>> 0,ia, >>> &mat >>> );CHKERRQ(ierr); >>> ierr = MatSetFromOptions(mat);CHKERRQ(ierr); >>> >>> ia - is number of nonzeros array which is obtained according parmetis partitioning. There were the same version of parmetis (3.1.1) in the all considered cases. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sat Jun 24 09:32:55 2023 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 24 Jun 2023 10:32:55 -0400 Subject: [petsc-users] Petsc 3.19.2 to 3.18.0 error possibility. In-Reply-To: <85f85fcb-2d80-c526-850a-af2654841b1b@bobolin.com.pl> References: <85f85fcb-2d80-c526-850a-af2654841b1b@bobolin.com.pl> Message-ID: <97F06775-EF0D-4720-BB14-BAE52182267C@petsc.dev> Look for places where you call VecSetValues(), make sure that after the call before you use the vector for some other use you call VecAssemblyBegin/VecAssemblyEnd > On Jun 23, 2023, at 6:57 PM, Pawe? Stebli?ski wrote: > > I'm using petsc 3.19.2 --with-debuging=yes (generated executable a little bigger than with no option). MPICH2 4.1.1. 
also --download-fblaslapack=1, --with-clanguage=C++, --with-mpi-dir=pathtompi, --with-x=0, --with-precision=DOUBLE, > > I have changed in PETSC_ARCH directory in file petscvariables: CLANGUAGE = from CXX to C due to fact that I am using CFLAGS variable indicating Include files in Makefile > > running path: > > nice -n -20 pathtompi/bin/mpiexec.gforker -np 80 pathtomagpar/src/magpar.exe $params > > OUTPUT (This output, due to distributed processing appears long after point of seqaij matrix init.): > > =================================================================================(START) > > > > [6]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [6]PETSC ERROR: Object is in wrong state > [6]PETSC ERROR: Not for unassembled vector, did you call VecAssemblyBegin()/VecAssemblyEnd()? > [6]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc! > [6]PETSC ERROR: Option left: name:-addcalorics value: 0 source: command line > [6]PETSC ERROR: Option left: name:-addhtomeq value: 0 source: command line > [6]PETSC ERROR: Option left: name:-addjtog value: 0 source: command line > [6]PETSC ERROR: Option left: name:-addterm value: 0 source: command line > [6]PETSC ERROR: Option left: name:-condinp_j value: 1e99 source: file > [6]PETSC ERROR: Option left: name:-condinp_t value: 1e-4 source: command line > [6]PETSC ERROR: Option left: name:-countG value: 0 source: command line > [6]PETSC ERROR: Option left: name:-countN value: 0 source: file > [6]PETSC ERROR: Option lef[8]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > > > > ================================================================================(END) > > > > > > when you set: --with-debuging=no and add to code: > > ierr = PetscMallocSetDebug(PETSC_TRUE,PETSC_TRUE);CHKERRQ(ierr); in main.c file before matrix initialization. > > You get OUTPUT BELOW: (about the point where function initialize matrix). > > When you comment matrix initialization the exception doesn't appear. > > > [0]PETSC ERROR: PetscTrFreeDefault() called from PetscEventRegLogRegister() at /home/psteb/PRACA/LIB_NEW/petsc-3.19.2/src/sys/logging/utils/eventlog.c:363 > [0]PETSC ERROR: Block at address 0x55a6d104bbb0 is corrupted; cannot free; > may be block not allocated with PetscMalloc() > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Memory corruption: https://petsc.org/release/faq/#valgrind > [0]PETSC ERROR: Bad location or corrupted memory > [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc! 
> [0]PETSC ERROR: Option left: name:-addcalorics value: 0 source: command line > [0]PETSC ERROR: Option left: name:-addhtomeq value: 0 source: command line > [0]PETSC ERROR: Option left: name:-addjtog value: 0 source: command line > [0]PETSC ERROR: Option left: name:-addterm value: 0 source: command line > [0]PETSC ERROR: Option left: name:-condinp_j value: 1e99 source: file > [0]PETSC ERROR: Option left: name:-condinp_t value: 1e-4 source: command line > [0]PETSC ERROR: Option left: name:-countG value: 0 source: command line > [0]PETSC ERROR: Option left: name:-countN value: 0 source: file > [0]PETSC ERROR: Option left: name:-D value: 20.0 source: file > [0]PETSC ERROR: Option left: name:-demag value: 0 source: command line > [0]PETSC ERROR: Option left: name:-dmi value: 2 source: command line > [0]PETSC ERROR: Option left: name:-dmi_xyzfile value: dmi.xyz source[server]: PMIU_parse_keyvals: unexpected key delimiter at character 48 in cmd > > > IN POLAND is midnight so I go sleep. Tommorow i will check --with-cflags='-g -O0' --with-cxxflags='-g -O0' --with-fflags='-g -O0' > > Best Regards > > Paul Steblinski > > > > > > W dniu 23.06.2023 o 23:41, Barry Smith pisze: >> >> Could you send us the exact error output that occurs? Cut and paste the run command and the entire error message. >> >> Also send the configure options you used. Have you tried configuring the later PETSc versions with all optimization turned off; use --with-debugging=1 --with-cflags='-g -O0' --with-cxxflags='-g -O0' --with-fflags='-g -O0' Does the same error occur? >> >> Barry >> >> >> >> >>> On Jun 23, 2023, at 5:30 PM, Pawe? Stebli?ski via petsc-users wrote: >>> >>> I am micromagnetic (MAGPAR) software developer. Old Magpar version has been using petsc 3.1-p8. I have decided to upgrade to petsc 3.19.2 with avx512 support. Unfortunately there appeared an error during software testing. Error appeares in ranning code after proper compiling and linking. The bug is in a code part which initializes matrix in petsc library versions: 3.19.2, 3.19.1, 3.19.0 and also from 3.18.5 to 3.18.0. If we use petsc version 3.17.5 the error doesn't appear. With this version (3.17.5) all is ok and simulation is running without any errors or throwing exceptions. My guess is linked to avx512 implementation which is good up to 3.17.5 version and buggy in upper mentioned versions with higher numbers. Avx512 is buggy according to tested SeqAij matrices. >>> >>> The exception is not thrown if we comment code fragment below. >>> >>> ierr = MatCreateSeqAIJ( >>> PETSC_COMM_SELF, >>> nvert,nvert, >>> 0,ia, >>> &mat >>> );CHKERRQ(ierr); >>> ierr = MatSetFromOptions(mat);CHKERRQ(ierr); >>> >>> ia - is number of nonzeros array which is obtained according parmetis partitioning. There were the same version of parmetis (3.1.1) in the all considered cases. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sat Jun 24 12:08:04 2023 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 24 Jun 2023 13:08:04 -0400 Subject: [petsc-users] Petsc 3.19.2 to 3.18.0 error possibility. In-Reply-To: <0cf7c73b-e2b5-07fd-4658-6cd239d15bd5@bobolin.com.pl> References: <85f85fcb-2d80-c526-850a-af2654841b1b@bobolin.com.pl> <97F06775-EF0D-4720-BB14-BAE52182267C@petsc.dev> <0cf7c73b-e2b5-07fd-4658-6cd239d15bd5@bobolin.com.pl> Message-ID: Older versions of PETSc have generally less error checking and debug versions have more error checking than optimized version. 
Once you update the code for newer versions it should work cleanly for both debug and optimized versions of PETSc.

   Barry

> On Jun 24, 2023, at 12:08 PM, Paweł Stebliński wrote:
>
> Welcome
>
> Thank you for your answer.
>
> I understand the error: when you set values in a Vec it must be assembled, and the same holds for matrices. But the exception is not thrown in 3.17.5 (without the debugging switch) and is thrown from 3.18.0 to 3.19.2 (with or without the debugging switch). The error is also not thrown in the old PETSc version 3.1-p8.
>
> Moreover, the exception is thrown in the 3.17.5 version with --with-debuging=yes but is not thrown in the build with --with-debuging=no. Don't you think it should not matter which build is considered, with or without debugging? The exception should be thrown in both cases.
>
> If some of the vectors were not assembled, an exception should also have been thrown in the older version 3.1-p8. But there is no exception in the older version.
>
> The code adjustment for compatibility with the new PETSc did not change such fundamental details as vector assembling. The programmer mostly had to change the names of variables and functions from old to new.
>
> Best regards
>
> Paul
>
> On 24.06.2023 at 16:32, Barry Smith wrote:
>>
>>    Look for places where you call VecSetValues(); make sure that, after the call and before you use the vector for anything else, you call VecAssemblyBegin()/VecAssemblyEnd().
>>
>>> On Jun 23, 2023, at 6:57 PM, Paweł Stebliński wrote:
>>>
>>> I'm using petsc 3.19.2 --with-debuging=yes (generated executable a little bigger than with no option). MPICH2 4.1.1. Also --download-fblaslapack=1, --with-clanguage=C++, --with-mpi-dir=pathtompi, --with-x=0, --with-precision=DOUBLE.
>>>
>>> I have changed, in the PETSC_ARCH directory in the file petscvariables, CLANGUAGE from CXX to C, due to the fact that I am using the CFLAGS variable to indicate include files in the Makefile.
>>>
>>> Run command:
>>>
>>> nice -n -20 pathtompi/bin/mpiexec.gforker -np 80 pathtomagpar/src/magpar.exe $params
>>>
>>> OUTPUT (this output, due to distributed processing, appears long after the point of the SeqAIJ matrix init):
>>>
>>> =================================================================================(START)
>>>
>>> [6]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>> [6]PETSC ERROR: Object is in wrong state
>>> [6]PETSC ERROR: Not for unassembled vector, did you call VecAssemblyBegin()/VecAssemblyEnd()?
>>> [6]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc!
>>> [6]PETSC ERROR: Option left: name:-addcalorics value: 0 source: command line >>> [6]PETSC ERROR: Option left: name:-addhtomeq value: 0 source: command line >>> [6]PETSC ERROR: Option left: name:-addjtog value: 0 source: command line >>> [6]PETSC ERROR: Option left: name:-addterm value: 0 source: command line >>> [6]PETSC ERROR: Option left: name:-condinp_j value: 1e99 source: file >>> [6]PETSC ERROR: Option left: name:-condinp_t value: 1e-4 source: command line >>> [6]PETSC ERROR: Option left: name:-countG value: 0 source: command line >>> [6]PETSC ERROR: Option left: name:-countN value: 0 source: file >>> [6]PETSC ERROR: Option lef[8]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>> >>> >>> >>> ================================================================================(END) >>> >>> >>> >>> >>> >>> when you set: --with-debuging=no and add to code: >>> >>> ierr = PetscMallocSetDebug(PETSC_TRUE,PETSC_TRUE);CHKERRQ(ierr); in main.c file before matrix initialization. >>> >>> You get OUTPUT BELOW: (about the point where function initialize matrix). >>> >>> When you comment matrix initialization the exception doesn't appear. >>> >>> >>> [0]PETSC ERROR: PetscTrFreeDefault() called from PetscEventRegLogRegister() at /home/psteb/PRACA/LIB_NEW/petsc-3.19.2/src/sys/logging/utils/eventlog.c:363 >>> [0]PETSC ERROR: Block at address 0x55a6d104bbb0 is corrupted; cannot free; >>> may be block not allocated with PetscMalloc() >>> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>> [0]PETSC ERROR: Memory corruption: https://petsc.org/release/faq/#valgrind >>> [0]PETSC ERROR: Bad location or corrupted memory >>> [0]PETSC ERROR: WARNING! There are option(s) set that were not used! Could be the program crashed before they were used or a spelling mistake, etc! >>> [0]PETSC ERROR: Option left: name:-addcalorics value: 0 source: command line >>> [0]PETSC ERROR: Option left: name:-addhtomeq value: 0 source: command line >>> [0]PETSC ERROR: Option left: name:-addjtog value: 0 source: command line >>> [0]PETSC ERROR: Option left: name:-addterm value: 0 source: command line >>> [0]PETSC ERROR: Option left: name:-condinp_j value: 1e99 source: file >>> [0]PETSC ERROR: Option left: name:-condinp_t value: 1e-4 source: command line >>> [0]PETSC ERROR: Option left: name:-countG value: 0 source: command line >>> [0]PETSC ERROR: Option left: name:-countN value: 0 source: file >>> [0]PETSC ERROR: Option left: name:-D value: 20.0 source: file >>> [0]PETSC ERROR: Option left: name:-demag value: 0 source: command line >>> [0]PETSC ERROR: Option left: name:-dmi value: 2 source: command line >>> [0]PETSC ERROR: Option left: name:-dmi_xyzfile value: dmi.xyz source[server]: PMIU_parse_keyvals: unexpected key delimiter at character 48 in cmd >>> >>> >>> IN POLAND is midnight so I go sleep. Tommorow i will check --with-cflags='-g -O0' --with-cxxflags='-g -O0' --with-fflags='-g -O0' >>> >>> Best Regards >>> >>> Paul Steblinski >>> >>> >>> >>> >>> >>> W dniu 23.06.2023 o 23:41, Barry Smith pisze: >>>> >>>> Could you send us the exact error output that occurs? Cut and paste the run command and the entire error message. >>>> >>>> Also send the configure options you used. Have you tried configuring the later PETSc versions with all optimization turned off; use --with-debugging=1 --with-cflags='-g -O0' --with-cxxflags='-g -O0' --with-fflags='-g -O0' Does the same error occur? 
>>>>
>>>>    Barry
>>>>
>>>>
>>>>> On Jun 23, 2023, at 5:30 PM, Paweł Stebliński via petsc-users wrote:
>>>>>
>>>>> I am a micromagnetic (MAGPAR) software developer. The old Magpar version has been using PETSc 3.1-p8. I decided to upgrade to PETSc 3.19.2 with AVX-512 support. Unfortunately, an error appeared during software testing. The error appears in the running code after proper compiling and linking. The bug shows up in a code part which initializes a matrix, with PETSc library versions 3.19.2, 3.19.1, 3.19.0 and also 3.18.5 down to 3.18.0. If we use PETSc version 3.17.5 the error does not appear. With this version (3.17.5) everything is fine and the simulation runs without any errors or thrown exceptions. My guess is that this is linked to the AVX-512 implementation, which is good up to version 3.17.5 and buggy in the later versions mentioned above. AVX-512 appears buggy with respect to the tested SeqAIJ matrices.
>>>>>
>>>>> The exception is not thrown if we comment out the code fragment below.
>>>>>
>>>>> ierr = MatCreateSeqAIJ(
>>>>>          PETSC_COMM_SELF,
>>>>>          nvert,nvert,
>>>>>          0,ia,
>>>>>          &mat
>>>>>        );CHKERRQ(ierr);
>>>>> ierr = MatSetFromOptions(mat);CHKERRQ(ierr);
>>>>>
>>>>> ia is the number-of-nonzeros array obtained from the ParMETIS partitioning. The same version of ParMETIS (3.1.1) was used in all considered cases.
>>>>>
>>>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
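A minimal sketch, not taken from MAGPAR or from the thread above, of the assembly pattern Barry describes: values inserted with VecSetValues() must be followed by VecAssemblyBegin()/VecAssemblyEnd() before the vector is used for anything else. The vector size and values here are invented for illustration, and PetscCall() assumes a reasonably recent PETSc (3.18 or later); older codes would use the ierr = ...;CHKERRQ(ierr); style instead.

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec      x;
  PetscInt i, rstart, rend, n = 8;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
  PetscCall(VecSetSizes(x, PETSC_DECIDE, n));
  PetscCall(VecSetFromOptions(x));
  PetscCall(VecGetOwnershipRange(x, &rstart, &rend));
  for (i = rstart; i < rend; i++) {
    PetscScalar v = (PetscScalar)i;
    /* VecSetValues() only caches the values ... */
    PetscCall(VecSetValues(x, 1, &i, &v, INSERT_VALUES));
  }
  /* ... the assembly pair makes them usable; required before VecView, VecNorm, VecDot, etc. */
  PetscCall(VecAssemblyBegin(x));
  PetscCall(VecAssemblyEnd(x));
  PetscCall(VecView(x, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(VecDestroy(&x));
  PetscCall(PetscFinalize());
  return 0;
}

With the two assembly calls placed before any other use of the vector, the "Not for unassembled vector" check quoted in this thread should no longer trigger.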
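Similarly, a hypothetical, self-contained variant of the MatCreateSeqAIJ() fragment quoted in this thread might look as follows: an nnz-per-row array (standing in for the "ia" array that MAGPAR obtains from the ParMETIS partitioning) is used for preallocation, entries are inserted with MatSetValues(), and MatAssemblyBegin()/MatAssemblyEnd() are called before the matrix is used. The size and the tridiagonal stencil are invented for illustration and do not reproduce the MAGPAR setup or the reported crash.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  PetscInt n = 5, i, *nnz;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscMalloc1(n, &nnz));
  for (i = 0; i < n; i++) nnz[i] = (i == 0 || i == n - 1) ? 2 : 3; /* nonzeros per row */
  PetscCall(MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 0, nnz, &A));
  PetscCall(MatSetFromOptions(A));
  for (i = 0; i < n; i++) {
    PetscInt    col[3], ncols = 0;
    PetscScalar v[3];
    if (i > 0)     { col[ncols] = i - 1; v[ncols++] = -1.0; }
    col[ncols] = i; v[ncols++] = 2.0;
    if (i < n - 1) { col[ncols] = i + 1; v[ncols++] = -1.0; }
    PetscCall(MatSetValues(A, 1, &i, ncols, col, v, INSERT_VALUES));
  }
  /* assemble before the matrix is used by a solver, MatMult, MatView, ... */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatView(A, PETSC_VIEWER_STDOUT_SELF));
  PetscCall(PetscFree(nnz));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}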
From edoardo.alinovi at gmail.com  Sun Jun 25 08:21:21 2023
From: edoardo.alinovi at gmail.com (Edoardo alinovi)
Date: Sun, 25 Jun 2023 15:21:21 +0200
Subject: [petsc-users] Questions on CPR preconditioner
Message-ID: 

Hello petsc's friends,

I have a curiosity about a sentence in the user guide about the CPR preconditioner:

"The Constrained Pressure Preconditioner (CPR) can be implemented using PCCOMPOSITE with PCGALERKIN.
CPR first solves an RAP subsystem, updates the residual on all variables (PCCompositeSetType(pc,PC_COMPOSITE_MULTIPLICATIVE)), and then applies a simple ILU-like preconditioner on all the variables."

I am certainly lacking some background here, would you be able to explain to me a bit better how this works? Do you have a minimal working example in the tutorials?

Many thanks as always! :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Sun Jun 25 08:25:08 2023
From: knepley at gmail.com (Matthew Knepley)
Date: Sun, 25 Jun 2023 09:25:08 -0400
Subject: [petsc-users] Questions on CPR preconditioner
In-Reply-To: 
References: 
Message-ID: 

On Sun, Jun 25, 2023 at 9:21 AM Edoardo alinovi wrote:

> Hello petsc's friends
>
> I have a curiosity about a sentence in the user guide about the CPR
> preconditioner:
>
> "The Constrained Pressure Preconditioner (CPR) can be implemented
> using PCCOMPOSITE with PCGALERKIN. CPR first solves
> an RAP subsystem, updates the residual on all variables (PCCompositeSetType
> (pc,PC_COMPOSITE_MULTIPLICATIVE)), and then
> applies a simple ILU-like preconditioner on all the variables."
>
> I am certainly lacking some background here, would you be able to
> explain to me a bit better how this works?
>

First, you select out some (linear combination of a) subset of the unknowns; this is the action of R. Then you solve that system and project the results back to the full space (solve RAP). After that, you use a simple ILU on the whole system, presumably because after that a diagonal-like PC is good.

> Do you have a minimal working example in the tutorials?
>

I do not know. I think Barry implemented this.

  Thanks,

     Matt

> Many thanks as always! :)
>

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From edoardo.alinovi at gmail.com  Sun Jun 25 08:36:19 2023
From: edoardo.alinovi at gmail.com (Edoardo alinovi)
Date: Sun, 25 Jun 2023 15:36:19 +0200
Subject: [petsc-users] Questions on CPR preconditioner
In-Reply-To: 
References: 
Message-ID: 

Thanks Matt,

This approach looks interesting; it would be great for me to have a look at a minimal example and try it out. Barry, are you there?

Cheers
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at petsc.dev  Sun Jun 25 11:01:09 2023
From: bsmith at petsc.dev (Barry Smith)
Date: Sun, 25 Jun 2023 12:01:09 -0400
Subject: [petsc-users] Questions on CPR preconditioner
In-Reply-To: 
References: 
Message-ID: 

   There is a thread https://lists.mcs.anl.gov/pipermail/petsc-dev/2019-March/023999.html that contains some attached code (follow the links URL: ).

> On Jun 25, 2023, at 9:36 AM, Edoardo alinovi wrote:
>
> Thanks Matt,
>
> This approach looks interesting; it would be great for me to have a look at a minimal example and try it out. Barry, are you there?
>
> Cheers
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From edoardo.alinovi at gmail.com  Sun Jun 25 12:40:24 2023
From: edoardo.alinovi at gmail.com (Edoardo alinovi)
Date: Sun, 25 Jun 2023 19:40:24 +0200
Subject: [petsc-users] Questions on CPR preconditioner
In-Reply-To: 
References: 
Message-ID: 

Hi Barry,

thanks for pointing me to that discussion!
Unfortunately I am getting issues with this link: http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20190302/b0c1ad29/attachment.mht , any chance it is a dead one? Cheers -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sun Jun 25 13:25:33 2023 From: bsmith at petsc.dev (Barry Smith) Date: Sun, 25 Jun 2023 14:25:33 -0400 Subject: [petsc-users] Questions on CPR preconditioner In-Reply-To: References: Message-ID: <8BCAAD32-9D39-4A54-82D2-FBECC1D83918@petsc.dev> It is pasted below > On Jun 25, 2023, at 1:40 PM, Edoardo alinovi wrote: > > Hi Barry, > > thanks for pointing me out to that discussion! Unfortunately I am getting issues with this link: http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20190302/b0c1ad29/attachment.mht , any chance it is a dead one? > > Cheers --====-=-= Content-Type: text/plain Content-Disposition: inline Ok, I have implemented the algorithm using PETSc PCCOMPOSITE and PCGALERKIN and get identical iterations as the code you sent. PCFIELDSPLIT is not intended for this type of solver composition. Here is the algorithm written in "two-step" form x_1/2 = P KSPSolve( R A P, using BoomerAMG) R b x = x_1/2 + PCApply( A, using Hypre PILUT preconditioner) ( b - A x_1/2) PCCOMPOSITE with a type of multiplicative handles the two steps and PCGALERKIN handles the P KSPSolve(R A P) R business. You will need to use the master version of PETSc because I had to add a feature to PCGALERKIN to allow the solver to be easily used for and A that changes values for later solvers. Here is the output from -ksp_view KSP Object: 1 MPI processes type: fgmres GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: composite Composite PC type - MULTIPLICATIVE PCs on composite preconditioner follow --------------------------------- PC Object: (sub_0_) 1 MPI processes type: galerkin Galerkin PC KSP on Galerkin follow --------------------------------- KSP Object: (sub_0_galerkin_) 1 MPI processes type: richardson Richardson: damping factor=1. maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (sub_0_galerkin_) 1 MPI processes type: hypre HYPRE BoomerAMG preconditioning HYPRE BoomerAMG: Cycle type V HYPRE BoomerAMG: Maximum number of levels 25 HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. HYPRE BoomerAMG: Threshold for strong coupling 0.25 HYPRE BoomerAMG: Interpolation truncation factor 0. HYPRE BoomerAMG: Interpolation: max elements per row 0 HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 HYPRE BoomerAMG: Maximum row sums 0.9 HYPRE BoomerAMG: Sweeps down 1 HYPRE BoomerAMG: Sweeps up 1 HYPRE BoomerAMG: Sweeps on coarse 1 HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax on coarse Gaussian-elimination HYPRE BoomerAMG: Relax weight (all) 1. HYPRE BoomerAMG: Outer relax weight (all) 1. HYPRE BoomerAMG: Using CF-relaxation HYPRE BoomerAMG: Not using more complex smoothers. 
HYPRE BoomerAMG: Measure type local HYPRE BoomerAMG: Coarsen type Falgout HYPRE BoomerAMG: Interpolation type classical linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=50, cols=50 total: nonzeros=244, allocated nonzeros=244 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=100, cols=100, bs=2 total: nonzeros=976, allocated nonzeros=100000 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 50 nodes, limit used is 5 PC Object: (sub_1_) 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1 HYPRE Pilut: drop tolerance 0.1 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=100, cols=100, bs=2 total: nonzeros=976, allocated nonzeros=100000 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 50 nodes, limit used is 5 Note that one can change the solvers on the two "stages" and all their options such as tolerances etc using the options database a proper prefixes which you can find in the output above. For example to use PETSc's ILU instead of hypre's just run with -sub_1_pc_type ilu or to use a direct solver instead of boomer -sub_0_galerkin_pc_type lu I've attached a new version of testmain2.c that runs your solver and also my version. Given the "unique" nature of your R = [ I I ... I] and P = [0 I 0 0 ... 0] I am not sure that it makes sense to include this preconditioner directly in PETSc as a new PC type; so if you are serious about using it you can take what I am sending back and modify it for your needs. As a numerical analyst who works on linear solvers I am also not convinced that this is likely to be a particular good preconditioner Let me know if you have any questions, Barry --====-=-= Content-Type: text/plain; name=testmain2.c Content-Disposition: attachment; filename=testmain2.c // make && mpirun -np 2 ./testmain2 -ksp_error_if_not_converged #include "MCPR.h" /* Computes the submatrix associated with the Galerkin subproblem Ap = R A P */ PetscErrorCode ComputeSubmatrix(PC pc,Mat A, Mat Ap, Mat *cAp,void *ctx) { PetscErrorCode ierr; PetscInt b,Am,An,start,end,first = 0, offset = 1; IS is,js; Mat Aij; PetscFunctionBegin; ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr); ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr); ierr = MatGetOwnershipRange(A, &start, &end);CHKERRQ(ierr); ierr = ISCreateStride(PetscObjectComm((PetscObject)A),Am/b,start+offset,b,&js);CHKERRQ(ierr); if (!Ap) { ierr = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+0,b,&is);CHKERRQ(ierr); ierr = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Ap);CHKERRQ(ierr); ierr = ISDestroy(&is);CHKERRQ(ierr); *cAp = Ap; first = 1; } else { ierr = MatZeroEntries(Ap);CHKERRQ(ierr); } for(PetscInt k=first;k<b;++k) { ierr = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+k,b,&is);CHKERRQ(ierr); ierr = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Aij);CHKERRQ(ierr); ierr = MatAXPY(Ap,1.0,Aij,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); ierr = MatDestroy(&Aij);CHKERRQ(ierr); ierr = ISDestroy(&is);CHKERRQ(ierr); } ierr = ISDestroy(&js);CHKERRQ(ierr); PetscFunctionReturn(0); } /* Apply the restriction operator for the Galkerin problem */ PetscErrorCode ApplyR(Mat A, Vec x,Vec y) { PetscErrorCode ierr; PetscInt b; PetscFunctionBegin; ierr = VecGetBlockSize(x,&b);CHKERRQ(ierr); ierr = 
VecStrideGather(x,0,y,INSERT_VALUES);CHKERRQ(ierr); for (PetscInt k=1;k<b;++k) {ierr = VecStrideGather(x,k,y,ADD_VALUES);CHKERRQ(ierr);} PetscFunctionReturn(0); } /* Apply the interpolation operator for the Galerkin problem */ PetscErrorCode ApplyP(Mat A, Vec x,Vec y) { PetscErrorCode ierr; PetscInt offset = 1; PetscFunctionBegin; ierr = VecStrideScatter(x,offset,y,INSERT_VALUES);CHKERRQ(ierr); PetscFunctionReturn(0); } int main( int argc, char *argv[] ) { PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL); int rank, size; MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current process id */ MPI_Comm_size (MPI_COMM_WORLD, &size); /* get number of processes */ MPI_Comm C = PETSC_COMM_WORLD; PetscRandom rnd; PetscRandomCreate(C,&rnd); PetscRandomSetInterval(rnd,0.0,1.0); PetscRandomSetFromOptions(rnd); int M = 100; int N = size*M; Mat A = getSampleMatrix(M); Mat T1 = getT1(A,1); Mat T2 = getT2(A,0.1); Mat MCPR = getMCPR(A,T1,T2); Vec u,v,w,z; VecCreate(C,&u); VecSetBlockSize(u,2); VecSetSizes(u,M,N); VecSetFromOptions(u); VecDuplicate(u,&v); VecDuplicate(u,&w); VecDuplicate(u,&z); VecSetRandom(u,rnd); Mat mats[] = {T2,MCPR}; const char *names[] = {"T2","MCPR"}; for(int k=1;k<2;++k) { KSP solver; KSPCreate(C,&solver); KSPSetOperators(solver,A,A); KSPSetType(solver,KSPFGMRES); KSPGMRESSetRestart(solver,N); PC pc; KSPGetPC(solver,&pc); putMatrixInPC(pc,mats[k]); KSPSetFromOptions(solver); KSPSolve(solver,u,v); KSPConvergedReason reason; int its; KSPGetConvergedReason(solver,&reason); KSPGetIterationNumber(solver,&its); PetscPrintf(PETSC_COMM_WORLD,"testmain2: %s converged reason %d; iterations %d.\n",names[k],reason,its); KSPView(solver,PETSC_VIEWER_STDOUT_WORLD); KSPDestroy(&solver); } /* The preconditioner is x_1/2 = P KSPSolve( R A P, using BoomerAMG) R b x = x_1/2 + PCApply( A, using Hypre PILUT preconditioner) ( b - A x_1/2) where the first line is implemented using PCGALERKIN with BoomerAMG on the subproblem so can be written as x_1/2 = PCApply(A, using PCGALERKIN with KSPSolve( R A P, using BoomerAMG) as the inner solver x = x_1/2 + PCApply( A, using Hypre PILUT preconditioner) ( b - A x_1/2) Which is implemented using the PETSc PCCOMPOSITE preconditioner of type multiplicative so can be written as x = PCApply(A, using PCCOMPOSITE using (PCGALERKIN with KSPSolve( R A P, using BoomerAMG) as the inner solver) as the first solver and PCApply( A, using Hypre PILUT preconditioner) as the second solver) */ { PetscErrorCode ierr; KSP ksp; ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr); ierr = KSPSetType(ksp,KSPFGMRES);CHKERRQ(ierr); ierr = KSPGMRESSetRestart(ksp,100);CHKERRQ(ierr); ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr); PC pc; ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr); ierr = PCSetType(pc,PCCOMPOSITE);CHKERRQ(ierr); ierr = PCCompositeSetType(pc,PC_COMPOSITE_MULTIPLICATIVE);CHKERRQ(ierr); /* Create first sub problem solver Hypre boomerAMG on Ap */ PC t1; ierr = PCCompositeAddPC(pc,PCGALERKIN);CHKERRQ(ierr); ierr = PCCompositeGetPC(pc,0,&t1);CHKERRQ(ierr); KSP Ap_ksp; ierr = PCGalerkinGetKSP(t1,&Ap_ksp);CHKERRQ(ierr); ierr = KSPSetType(Ap_ksp,KSPRICHARDSON);CHKERRQ(ierr); PC Ap_pc; ierr = KSPGetPC(Ap_ksp,&Ap_pc);CHKERRQ(ierr); ierr = PCSetType(Ap_pc,PCHYPRE);CHKERRQ(ierr); /* this tells the PC how to compute the reduced matrix */ ierr = PCGalerkinSetComputeSubmatrix(t1,ComputeSubmatrix,NULL);CHKERRQ(ierr); PetscInt b,Am,An; ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr); ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr); int start,end; ierr = MatGetOwnershipRange(A, &start, 
&end);CHKERRQ(ierr); /* create the R operator */ Mat R; ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am/b,An,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&R); ierr = MatShellSetOperation(R,MATOP_MULT,(void (*)(void))ApplyR);CHKERRQ(ierr); ierr = PCGalerkinSetRestriction(t1,R);CHKERRQ(ierr); /* create the P operator */ Mat P; ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am,An/b,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&P); ierr = MatShellSetOperation(P,MATOP_MULT,(void (*)(void))ApplyP);CHKERRQ(ierr); ierr = PCGalerkinSetInterpolation(t1,P);CHKERRQ(ierr); /* Create the second subproblem solver Block ILU */ PC t2; ierr = PCCompositeAddPC(pc,PCHYPRE);CHKERRQ(ierr); ierr = PCCompositeGetPC(pc,1,&t2);CHKERRQ(ierr); ierr = PCSetType(t2,PCHYPRE);CHKERRQ(ierr); ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_maxiter","1"); char s[100]; sprintf(s,"%e",.1); ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_tol",s);CHKERRQ(ierr); ierr = PCHYPRESetType(t2,"pilut");CHKERRQ(ierr); ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); ierr = VecZeroEntries(v);CHKERRQ(ierr); ierr = KSPSolve(ksp,u,v);CHKERRQ(ierr); ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); } return 0; } --====-=-= Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable > On Jan 23, 2017, at 8:25 AM, S=C3=A9bastien Loisel <sloisel at gmail.com > wr= ote: >=20 > Hi Barry, >=20 > Thanks for your email. Thanks for pointing out my PetSC sillyness. I thin= k what happened is I played with matrices as well as with preconditioners s= o I initially implemented it as MATSHELL and at the end wrapped it in a PCS= HELL. :) >=20 > On Mon, Jan 23, 2017 at 2:26 AM, Barry Smith <bsmith at mcs.anl.gov > wrote: > I've started to look at this; it is a little weird using MATSHELL within = your PCSHELL, seems unnecessary. Not necessarily wrong, just strange. I'll = continue to try to understand it. >=20 > Barry >=20 >=20 >=20 > > On Jan 20, 2017, at 5:48 PM, S=C3=A9bastien Loisel <sloisel at gmail.com > = wrote: > > > > OK I'm attaching the prototype. > > > > It won't be 100% plug-and-play for you because in principle T1 and T2 a= re built on top of "sub-preconditioners" (AMG for T1 and BILU for T2) and j= udging from the PetSC architecture elsewhere I must assume you're going to = want to expose those in a rational way. At present, I've hard-coded some su= b-preconditioners, and in particular for T2 I had to resort to PILUT becaus= e I didn't have a BILU(0) handy. > > > > Also, I broke the PetSC law and my functions don't return integers, so = I also assume you're going to want to tweak that... Sorry! > > > > On Fri, Jan 20, 2017 at 11:29 PM, Barry Smith <bsmith at mcs.anl.gov > wrot= e: > > > > Sure, email the PCSHELL version, best case scenario I only need chan= ge out the PCSHELL and it takes me 5 minutes :-) > > > > > > > On Jan 20, 2017, at 5:07 PM, S=C3=A9bastien Loisel <sloisel at gmail.com = > wrote: > > > > > > Hi all, > > > > > > Thanks for your emails. I'm willing to help in whatever way. We have = a "PCSHELL" prototype we can provide on request, although PetSC experts can= no doubt do a better job than I did. > > > > > > Thanks, > > > > > > On Fri, Jan 20, 2017 at 9:50 PM, Robert Annewandter <robert.annewandt= er at opengosim.com > wrote: > > > Indeed that would be very helpful! We're very happy to support to tes= t things out, provide feedback etc > > > > > > Many thanks! 
> > > Robert > > > > > > > > > > > > On 20/01/17 21:22, Hammond, Glenn E wrote: > > >> That sounds great. I do know that there is also much interest state= -side in CPR preconditioning within PFLOTRAN. I have a Sandia colleague in= Carlsbad, NM who has been asking about it. I am sure that Sebastien and/o= r Robert will help out in any way possible. > > >> > > >> Thanks, > > >> > > >> Glenn > > >> > > >> > > >>> -----Original Message----- > > >>> From: Barry Smith [ > > >>> mailto:bsmith at mcs.anl.gov > > >>> ] > > >>> Sent: Friday, January 20, 2017 12:58 PM > > >>> To: Hammond, Glenn E > > >>> <gehammo at sandia.gov > > > >>> > > >>> Cc: S=C3=A9bastien Loisel > > >>> <sloisel at gmail.com > > > >>> ; Robert Annewandter > > >>> > > >>> <robert.annewandter at opengosim.com >; Jed Brown <jed at jedbrown.org > > > >>> ; > > >>> Paolo Orsini > > >>> <paolo.orsini at opengosim.com > > > >>> ; Matthew Knepley > > >>> > > >>> <knepley at gmail.com > > > >>> > > >>> Subject: Re: [EXTERNAL] CPR preconditioning > > >>> > > >>> > > >>> Glenn, > > >>> > > >>> Sorry about the delay in processing this, too much going on ... > > >>> > > >>> I think the best thing is for us (the PETSc developers) to impl= ement a CPR > > >>> preconditioner "directly" as its own PC and then have you guys try = it out. I am > > >>> planning to do this. > > >>> > > >>> Barry > > >>> > > >>> > > >>>> On Jan 20, 2017, at 2:50 PM, Hammond, Glenn E <gehammo at sandia.gov > > > >>> wrote: > > >>> > > >>>> Barry, Jed or Matt, > > >>>> > > >>>> Do you have any suggestions for how to address the limitations of > > >>>> > > >>> PetscFieldSplit() discussed below. Will they need to manipulate th= e matrices > > >>> manually? > > >>> > > >>>> Thanks, > > >>>> > > >>>> Glenn > > >>>> > > >>>> From: S=C3=A9bastien Loisel [ > > >>>> mailto:sloisel at gmail.com > > >>>> ] > > >>>> Sent: Wednesday, January 11, 2017 3:33 AM > > >>>> To: Barry Smith > > >>>> <bsmith at mcs.anl.gov > > > >>>> ; Robert Annewandter > > >>>> > > >>>> <robert.annewandter at opengosim.com > > > >>>> ; Hammond, Glenn E > > >>>> > > >>>> <gehammo at sandia.gov >; Jed Brown <jed at jedbrown.org > > > >>>> ; Paolo Orsini > > >>>> > > >>>> <paolo.orsini at opengosim.com > > > >>>> > > >>>> Subject: [EXTERNAL] CPR preconditioning > > >>>> > > >>>> Dear Friends, > > >>>> > > >>>> Paolo has asked me to write this email to clarify issues surroundi= ng > > >>>> the CPR preconditioner that is widely used in multiphase flow. I k= now > > >>>> Barry from a long time ago but we only met once when I was a PhD > > >>>> student so I would be shocked if he remembered me. :) > > >>>> > > >>>> I'm a math assistant professor and one of my areas of specializati= on is linear > > >>>> > > >>> algebra and preconditioning. > > >>> > > >>>> The main issue that is useful to clarify is the following. There w= as a proposal > > >>>> > > >>> to use PetSC's PETSCFIELDSPLIT in conjunction with PCCOMPOSITE in o= rder to > > >>> implement CPR preconditioning. Although this is morally the right i= dea, this > > >>> seems to be currently impossible because PETSCFIELDSPLIT lacks the > > >>> capabilities it would need to implement the T1 preconditioner. This= is due to > > >>> limitations in the API exposed by PETSCFIELDSPLIT (and no doubt lim= itations > > >>> in the underlying implementation). > > >>> > > >>>> In order to be as clear as possible, please allow me to describe > > >>>> > > >>> unambiguously the first of the two parts of the CPR preconditioner = using > > >>> some MATLAB snippets. 
Let I denote the N by N identity, and Z the N= by N > > >>> zero matrix. Put WT =3D [I I I] and C =3D [Z;Z;I]. The pressure mat= rix is Ap =3D > > >>> WT*A*C, and the T1 preconditioner is C*inv(Ap)*WT, where inv(Ap) is= to be > > >>> implemented with AMG. > > >>> > > >>>> This T1 preconditioner is the one that would have to be implemente= d by > > >>>> > > >>> PETSCFIELDSPLIT. The following limitations in PETSCFIELDSPLIT preve= nts one > > >>> to implement T1: > > >>> > > >>>> =E2=80=A2 One must select the "pressure" by specifying an IS ob= ject to > > >>>> > > >>> PCFieldSplitSetIS(). However, since WT =3D [I I I], the pressure is= obtained by > > >>> summing over the three fields. As far as I can tell, an IS object d= oes not allow > > >>> one to sum over several entries to obtain the pressure field. > > >>> > > >>>> =E2=80=A2 The pressure matrix is Ap =3D WT*A*C; note that the m= atrix WT on > > >>>> > > >>> the left is different from the matrix C on the right. However, PCFI= ELDSPLIT > > >>> has no notion of a "left-IS" vs "right-IS"; morally, only diagonal = blocks of A can > > >>> be used by PCFIELDSPLIT. > > >>> > > >>>> =E2=80=A2 PCFIELDSPLIT offers a range of hard-coded block struc= tures for the > > >>>> > > >>> final preconditioner, but the choice T1 =3D C*inv(Ap)*WT is not one= of these > > >>> choices. Indeed, this first stage CPR preconditioner T1 is *singula= r*, but there > > >>> is no obvious way for PCFIELDSPLIT to produce a singular preconditi= oner. > > >>> > > >>>> Note that the documentation for PETSCFIELDSPLIT says that "The > > >>>> > > >>> Constrained Pressure Preconditioner (CPR) does not appear to be cur= rently > > >>> implementable directly with PCFIELDSPLIT". Unless there are very si= gnificant > > >>> capabilities that are not documented, I don't see how CPR can be > > >>> implemented with PETSCFIELDSPLIT. > > >>> > > >>>> Elsewhere, someone proposed putting the two preconditioners T1 and= T2 > > >>>> on each side of A, e.g. T1*A*T2. That is a very bad idea because T= 1 is > > >>>> singular and hence T1*A*T2 is also singular. The correct CPR > > >>>> preconditioner is nonsingular despite the fact that T1 is singular, > > >>>> and MCPR is given by the formula MCPR =3D T2*(I-A*T1)+T1, where T2= =3D > > >>>> BILU(0) of A. (There is a proof, due to Felix Kwok, that BILU(0) w= orks > > >>>> even though ILU(0) craps out on vanishing diagonal entries.) > > >>>> > > >>>> I'm also attaching a sample MATLAB code that runs the CPR precondi= tioner > > >>>> > > >>> on some fabricated random matrix A. I emphasize that this is not a = realistic > > >>> matrix, but it still illustrates that the algorithm works, and that= MCPR is better > > >>> than T2 alone. Note that T1 cannot be used alone since it is singul= ar. Further > > >>> gains are expected when the Robert's realistic code with correct ph= ysics will > > >>> come online. > > >>> > > >>>> <image003.jpg> > > >>>> I hope this clarifies some things. 
> > >>>> > > >>>> > > >>>> S=C3=A9bastien Loisel > > >>>> Assistant Professor > > >>>> Department of Mathematics, Heriot-Watt University Riccarton, EH14 = 4AS, > > >>>> United Kingdom > > >>>> web: > > >>>> http://www.ma.hw.ac.uk/~loisel/ > > >>>> > > >>>> email: S.Loisel at > > >>>> hw.ac.uk > > >>>> > > >>>> phone: > > >>>> +44 131 451 3234 > > >>>> > > >>>> fax: > > >>>> +44 131 451 3249 > > > > > > > > > > > > > > > -- > > > S=C3=A9bastien Loisel > > > Assistant Professor > > > Department of Mathematics, Heriot-Watt University > > > Riccarton, EH14 4AS, United Kingdom > > > web: http://www.ma.hw.ac.uk/~loisel/ > > > email: S.Loisel at hw.ac.uk > > > phone: +44 131 451 3234 > > > fax: +44 131 451 3249 > > > > > > > > > > > > > -- > > S=C3=A9bastien Loisel > > Assistant Professor > > Department of Mathematics, Heriot-Watt University > > Riccarton, EH14 4AS, United Kingdom > > web: http://www.ma.hw.ac.uk/~loisel/ > > email: S.Loisel at hw.ac.uk > > phone: +44 131 451 3234 > > fax: +44 131 451 3249 > > > > <petsc.zip> >=20 >=20 >=20 >=20 > --=20 > S=C3=A9bastien Loisel > Assistant Professor > Department of Mathematics, Heriot-Watt University > Riccarton, EH14 4AS, United Kingdom > web: http://www.ma.hw.ac.uk/~loisel/ > email: S.Loisel at hw.ac.uk > phone: +44 131 451 3234 > fax: +44 131 451 3249 >=20 --====-=-=-- ] -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sun Jun 25 13:25:33 2023 From: bsmith at petsc.dev (Barry Smith) Date: Sun, 25 Jun 2023 14:25:33 -0400 Subject: [petsc-users] Questions on CPR preconditioner In-Reply-To: References: Message-ID: <8BCAAD32-9D39-4A54-82D2-FBECC1D83918@petsc.dev> It is pasted below > On Jun 25, 2023, at 1:40 PM, Edoardo alinovi wrote: > > Hi Barry, > > thanks for pointing me out to that discussion! Unfortunately I am getting issues with this link: http://lists.mcs.anl.gov/pipermail/petsc-dev/attachments/20190302/b0c1ad29/attachment.mht , any chance it is a dead one? > > Cheers --====-=-= Content-Type: text/plain Content-Disposition: inline Ok, I have implemented the algorithm using PETSc PCCOMPOSITE and PCGALERKIN and get identical iterations as the code you sent. PCFIELDSPLIT is not intended for this type of solver composition. Here is the algorithm written in "two-step" form x_1/2 = P KSPSolve( R A P, using BoomerAMG) R b x = x_1/2 + PCApply( A, using Hypre PILUT preconditioner) ( b - A x_1/2) PCCOMPOSITE with a type of multiplicative handles the two steps and PCGALERKIN handles the P KSPSolve(R A P) R business. You will need to use the master version of PETSc because I had to add a feature to PCGALERKIN to allow the solver to be easily used for and A that changes values for later solvers. Here is the output from -ksp_view KSP Object: 1 MPI processes type: fgmres GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: composite Composite PC type - MULTIPLICATIVE PCs on composite preconditioner follow --------------------------------- PC Object: (sub_0_) 1 MPI processes type: galerkin Galerkin PC KSP on Galerkin follow --------------------------------- KSP Object: (sub_0_galerkin_) 1 MPI processes type: richardson Richardson: damping factor=1. 
maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (sub_0_galerkin_) 1 MPI processes type: hypre HYPRE BoomerAMG preconditioning HYPRE BoomerAMG: Cycle type V HYPRE BoomerAMG: Maximum number of levels 25 HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. HYPRE BoomerAMG: Threshold for strong coupling 0.25 HYPRE BoomerAMG: Interpolation truncation factor 0. HYPRE BoomerAMG: Interpolation: max elements per row 0 HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 HYPRE BoomerAMG: Maximum row sums 0.9 HYPRE BoomerAMG: Sweeps down 1 HYPRE BoomerAMG: Sweeps up 1 HYPRE BoomerAMG: Sweeps on coarse 1 HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax on coarse Gaussian-elimination HYPRE BoomerAMG: Relax weight (all) 1. HYPRE BoomerAMG: Outer relax weight (all) 1. HYPRE BoomerAMG: Using CF-relaxation HYPRE BoomerAMG: Not using more complex smoothers. HYPRE BoomerAMG: Measure type local HYPRE BoomerAMG: Coarsen type Falgout HYPRE BoomerAMG: Interpolation type classical linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=50, cols=50 total: nonzeros=244, allocated nonzeros=244 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=100, cols=100, bs=2 total: nonzeros=976, allocated nonzeros=100000 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 50 nodes, limit used is 5 PC Object: (sub_1_) 1 MPI processes type: hypre HYPRE Pilut preconditioning HYPRE Pilut: maximum number of iterations 1 HYPRE Pilut: drop tolerance 0.1 HYPRE Pilut: default factor row size linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=100, cols=100, bs=2 total: nonzeros=976, allocated nonzeros=100000 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 50 nodes, limit used is 5 Note that one can change the solvers on the two "stages" and all their options such as tolerances etc using the options database a proper prefixes which you can find in the output above. For example to use PETSc's ILU instead of hypre's just run with -sub_1_pc_type ilu or to use a direct solver instead of boomer -sub_0_galerkin_pc_type lu I've attached a new version of testmain2.c that runs your solver and also my version. Given the "unique" nature of your R = [ I I ... I] and P = [0 I 0 0 ... 0] I am not sure that it makes sense to include this preconditioner directly in PETSc as a new PC type; so if you are serious about using it you can take what I am sending back and modify it for your needs. 
As a numerical analyst who works on linear solvers I am also not convinced that this is likely to be a particular good preconditioner Let me know if you have any questions, Barry --====-=-= Content-Type: text/plain; name=testmain2.c Content-Disposition: attachment; filename=testmain2.c // make && mpirun -np 2 ./testmain2 -ksp_error_if_not_converged #include "MCPR.h" /* Computes the submatrix associated with the Galerkin subproblem Ap = R A P */ PetscErrorCode ComputeSubmatrix(PC pc,Mat A, Mat Ap, Mat *cAp,void *ctx) { PetscErrorCode ierr; PetscInt b,Am,An,start,end,first = 0, offset = 1; IS is,js; Mat Aij; PetscFunctionBegin; ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr); ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr); ierr = MatGetOwnershipRange(A, &start, &end);CHKERRQ(ierr); ierr = ISCreateStride(PetscObjectComm((PetscObject)A),Am/b,start+offset,b,&js);CHKERRQ(ierr); if (!Ap) { ierr = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+0,b,&is);CHKERRQ(ierr); ierr = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Ap);CHKERRQ(ierr); ierr = ISDestroy(&is);CHKERRQ(ierr); *cAp = Ap; first = 1; } else { ierr = MatZeroEntries(Ap);CHKERRQ(ierr); } for(PetscInt k=first;k<b;++k) { ierr = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+k,b,&is);CHKERRQ(ierr); ierr = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Aij);CHKERRQ(ierr); ierr = MatAXPY(Ap,1.0,Aij,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); ierr = MatDestroy(&Aij);CHKERRQ(ierr); ierr = ISDestroy(&is);CHKERRQ(ierr); } ierr = ISDestroy(&js);CHKERRQ(ierr); PetscFunctionReturn(0); } /* Apply the restriction operator for the Galkerin problem */ PetscErrorCode ApplyR(Mat A, Vec x,Vec y) { PetscErrorCode ierr; PetscInt b; PetscFunctionBegin; ierr = VecGetBlockSize(x,&b);CHKERRQ(ierr); ierr = VecStrideGather(x,0,y,INSERT_VALUES);CHKERRQ(ierr); for (PetscInt k=1;k<b;++k) {ierr = VecStrideGather(x,k,y,ADD_VALUES);CHKERRQ(ierr);} PetscFunctionReturn(0); } /* Apply the interpolation operator for the Galerkin problem */ PetscErrorCode ApplyP(Mat A, Vec x,Vec y) { PetscErrorCode ierr; PetscInt offset = 1; PetscFunctionBegin; ierr = VecStrideScatter(x,offset,y,INSERT_VALUES);CHKERRQ(ierr); PetscFunctionReturn(0); } int main( int argc, char *argv[] ) { PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL); int rank, size; MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current process id */ MPI_Comm_size (MPI_COMM_WORLD, &size); /* get number of processes */ MPI_Comm C = PETSC_COMM_WORLD; PetscRandom rnd; PetscRandomCreate(C,&rnd); PetscRandomSetInterval(rnd,0.0,1.0); PetscRandomSetFromOptions(rnd); int M = 100; int N = size*M; Mat A = getSampleMatrix(M); Mat T1 = getT1(A,1); Mat T2 = getT2(A,0.1); Mat MCPR = getMCPR(A,T1,T2); Vec u,v,w,z; VecCreate(C,&u); VecSetBlockSize(u,2); VecSetSizes(u,M,N); VecSetFromOptions(u); VecDuplicate(u,&v); VecDuplicate(u,&w); VecDuplicate(u,&z); VecSetRandom(u,rnd); Mat mats[] = {T2,MCPR}; const char *names[] = {"T2","MCPR"}; for(int k=1;k<2;++k) { KSP solver; KSPCreate(C,&solver); KSPSetOperators(solver,A,A); KSPSetType(solver,KSPFGMRES); KSPGMRESSetRestart(solver,N); PC pc; KSPGetPC(solver,&pc); putMatrixInPC(pc,mats[k]); KSPSetFromOptions(solver); KSPSolve(solver,u,v); KSPConvergedReason reason; int its; KSPGetConvergedReason(solver,&reason); KSPGetIterationNumber(solver,&its); PetscPrintf(PETSC_COMM_WORLD,"testmain2: %s converged reason %d; iterations %d.\n",names[k],reason,its); KSPView(solver,PETSC_VIEWER_STDOUT_WORLD); KSPDestroy(&solver); } /* The preconditioner is x_1/2 = P KSPSolve( R A P, using BoomerAMG) R b 
x = x_1/2 + PCApply( A, using Hypre PILUT preconditioner) ( b - A x_1/2) where the first line is implemented using PCGALERKIN with BoomerAMG on the subproblem so can be written as x_1/2 = PCApply(A, using PCGALERKIN with KSPSolve( R A P, using BoomerAMG) as the inner solver x = x_1/2 + PCApply( A, using Hypre PILUT preconditioner) ( b - A x_1/2) Which is implemented using the PETSc PCCOMPOSITE preconditioner of type multiplicative so can be written as x = PCApply(A, using PCCOMPOSITE using (PCGALERKIN with KSPSolve( R A P, using BoomerAMG) as the inner solver) as the first solver and PCApply( A, using Hypre PILUT preconditioner) as the second solver) */ { PetscErrorCode ierr; KSP ksp; ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr); ierr = KSPSetType(ksp,KSPFGMRES);CHKERRQ(ierr); ierr = KSPGMRESSetRestart(ksp,100);CHKERRQ(ierr); ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr); PC pc; ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr); ierr = PCSetType(pc,PCCOMPOSITE);CHKERRQ(ierr); ierr = PCCompositeSetType(pc,PC_COMPOSITE_MULTIPLICATIVE);CHKERRQ(ierr); /* Create first sub problem solver Hypre boomerAMG on Ap */ PC t1; ierr = PCCompositeAddPC(pc,PCGALERKIN);CHKERRQ(ierr); ierr = PCCompositeGetPC(pc,0,&t1);CHKERRQ(ierr); KSP Ap_ksp; ierr = PCGalerkinGetKSP(t1,&Ap_ksp);CHKERRQ(ierr); ierr = KSPSetType(Ap_ksp,KSPRICHARDSON);CHKERRQ(ierr); PC Ap_pc; ierr = KSPGetPC(Ap_ksp,&Ap_pc);CHKERRQ(ierr); ierr = PCSetType(Ap_pc,PCHYPRE);CHKERRQ(ierr); /* this tells the PC how to compute the reduced matrix */ ierr = PCGalerkinSetComputeSubmatrix(t1,ComputeSubmatrix,NULL);CHKERRQ(ierr); PetscInt b,Am,An; ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr); ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr); int start,end; ierr = MatGetOwnershipRange(A, &start, &end);CHKERRQ(ierr); /* create the R operator */ Mat R; ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am/b,An,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&R); ierr = MatShellSetOperation(R,MATOP_MULT,(void (*)(void))ApplyR);CHKERRQ(ierr); ierr = PCGalerkinSetRestriction(t1,R);CHKERRQ(ierr); /* create the P operator */ Mat P; ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am,An/b,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&P); ierr = MatShellSetOperation(P,MATOP_MULT,(void (*)(void))ApplyP);CHKERRQ(ierr); ierr = PCGalerkinSetInterpolation(t1,P);CHKERRQ(ierr); /* Create the second subproblem solver Block ILU */ PC t2; ierr = PCCompositeAddPC(pc,PCHYPRE);CHKERRQ(ierr); ierr = PCCompositeGetPC(pc,1,&t2);CHKERRQ(ierr); ierr = PCSetType(t2,PCHYPRE);CHKERRQ(ierr); ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_maxiter","1"); char s[100]; sprintf(s,"%e",.1); ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_tol",s);CHKERRQ(ierr); ierr = PCHYPRESetType(t2,"pilut");CHKERRQ(ierr); ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); ierr = VecZeroEntries(v);CHKERRQ(ierr); ierr = KSPSolve(ksp,u,v);CHKERRQ(ierr); ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); } return 0; } --====-=-= Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: quoted-printable > On Jan 23, 2017, at 8:25 AM, S=C3=A9bastien Loisel <sloisel at gmail.com > wr= ote: >=20 > Hi Barry, >=20 > Thanks for your email. Thanks for pointing out my PetSC sillyness. I thin= k what happened is I played with matrices as well as with preconditioners s= o I initially implemented it as MATSHELL and at the end wrapped it in a PCS= HELL. 
:)
>
> On Mon, Jan 23, 2017 at 2:26 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> I've started to look at this; it is a little weird using MATSHELL within your PCSHELL, seems unnecessary. Not necessarily wrong, just strange. I'll continue to try to understand it.
>
> Barry
>
> > On Jan 20, 2017, at 5:48 PM, Sébastien Loisel <sloisel at gmail.com> wrote:
> >
> > OK I'm attaching the prototype.
> >
> > It won't be 100% plug-and-play for you because in principle T1 and T2 are built on top of "sub-preconditioners" (AMG for T1 and BILU for T2) and, judging from the PETSc architecture elsewhere, I must assume you're going to want to expose those in a rational way. At present, I've hard-coded some sub-preconditioners, and in particular for T2 I had to resort to PILUT because I didn't have a BILU(0) handy.
> >
> > Also, I broke the PETSc law and my functions don't return integers, so I also assume you're going to want to tweak that... Sorry!
> >
> > On Fri, Jan 20, 2017 at 11:29 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > Sure, email the PCSHELL version; best case scenario I only need to change out the PCSHELL and it takes me 5 minutes :-)
> >
> > > On Jan 20, 2017, at 5:07 PM, Sébastien Loisel <sloisel at gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > Thanks for your emails. I'm willing to help in whatever way. We have a "PCSHELL" prototype we can provide on request, although PETSc experts can no doubt do a better job than I did.
> > >
> > > Thanks,
> > >
> > > On Fri, Jan 20, 2017 at 9:50 PM, Robert Annewandter <robert.annewandter at opengosim.com> wrote:
> > > Indeed that would be very helpful! We're very happy to help test things out, provide feedback, etc.
> > >
> > > Many thanks!
> > > Robert
> > >
> > > On 20/01/17 21:22, Hammond, Glenn E wrote:
> > >> That sounds great. I do know that there is also much interest state-side in CPR preconditioning within PFLOTRAN. I have a Sandia colleague in Carlsbad, NM who has been asking about it. I am sure that Sebastien and/or Robert will help out in any way possible.
> > >>
> > >> Thanks,
> > >>
> > >> Glenn
> > >>
> > >>> -----Original Message-----
> > >>> From: Barry Smith [mailto:bsmith at mcs.anl.gov]
> > >>> Sent: Friday, January 20, 2017 12:58 PM
> > >>> To: Hammond, Glenn E <gehammo at sandia.gov>
> > >>> Cc: Sébastien Loisel <sloisel at gmail.com>; Robert Annewandter <robert.annewandter at opengosim.com>; Jed Brown <jed at jedbrown.org>; Paolo Orsini <paolo.orsini at opengosim.com>; Matthew Knepley <knepley at gmail.com>
> > >>> Subject: Re: [EXTERNAL] CPR preconditioning
> > >>>
> > >>> Glenn,
> > >>>
> > >>> Sorry about the delay in processing this, too much going on ...
> > >>>
> > >>> I think the best thing is for us (the PETSc developers) to implement a CPR
> > >>> preconditioner "directly" as its own PC and then have you guys try it out. I am
> > >>> planning to do this.
> > >>>
> > >>> Barry
> > >>>
> > >>>> On Jan 20, 2017, at 2:50 PM, Hammond, Glenn E <gehammo at sandia.gov> wrote:
> > >>>
> > >>>> Barry, Jed or Matt,
> > >>>>
> > >>>> Do you have any suggestions for how to address the limitations of
> > >>>> PetscFieldSplit() discussed below? Will they need to manipulate the matrices
> > >>>> manually?
> > >>>>
> > >>>> Thanks,
> > >>>>
> > >>>> Glenn
> > >>>>
> > >>>> From: Sébastien Loisel [mailto:sloisel at gmail.com]
> > >>>> Sent: Wednesday, January 11, 2017 3:33 AM
> > >>>> To: Barry Smith <bsmith at mcs.anl.gov>; Robert Annewandter <robert.annewandter at opengosim.com>; Hammond, Glenn E <gehammo at sandia.gov>; Jed Brown <jed at jedbrown.org>; Paolo Orsini <paolo.orsini at opengosim.com>
> > >>>> Subject: [EXTERNAL] CPR preconditioning
> > >>>>
> > >>>> Dear Friends,
> > >>>>
> > >>>> Paolo has asked me to write this email to clarify issues surrounding
> > >>>> the CPR preconditioner that is widely used in multiphase flow. I know
> > >>>> Barry from a long time ago but we only met once when I was a PhD
> > >>>> student so I would be shocked if he remembered me. :)
> > >>>>
> > >>>> I'm a math assistant professor and one of my areas of specialization is linear
> > >>>> algebra and preconditioning.
> > >>>>
> > >>>> The main issue that is useful to clarify is the following. There was a proposal
> > >>>> to use PETSc's PCFIELDSPLIT in conjunction with PCCOMPOSITE in order to
> > >>>> implement CPR preconditioning. Although this is morally the right idea, this
> > >>>> seems to be currently impossible because PCFIELDSPLIT lacks the
> > >>>> capabilities it would need to implement the T1 preconditioner. This is due to
> > >>>> limitations in the API exposed by PCFIELDSPLIT (and no doubt limitations
> > >>>> in the underlying implementation).
> > >>>>
> > >>>> In order to be as clear as possible, please allow me to describe
> > >>>> unambiguously the first of the two parts of the CPR preconditioner using
> > >>>> some MATLAB snippets. Let I denote the N by N identity, and Z the N by N
> > >>>> zero matrix. Put WT = [I I I] and C = [Z;Z;I]. The pressure matrix is
> > >>>> Ap = WT*A*C, and the T1 preconditioner is C*inv(Ap)*WT, where inv(Ap) is to be
> > >>>> implemented with AMG.
> > >>>>
> > >>>> This T1 preconditioner is the one that would have to be implemented by
> > >>>> PCFIELDSPLIT. The following limitations in PCFIELDSPLIT prevent one
> > >>>> from implementing T1:
> > >>>>
> > >>>> • One must select the "pressure" by specifying an IS object to
> > >>>> PCFieldSplitSetIS(). However, since WT = [I I I], the pressure is obtained by
> > >>>> summing over the three fields. As far as I can tell, an IS object does not allow
> > >>>> one to sum over several entries to obtain the pressure field.
> > >>>>
> > >>>> • The pressure matrix is Ap = WT*A*C; note that the matrix WT on
> > >>>> the left is different from the matrix C on the right. However, PCFIELDSPLIT
> > >>>> has no notion of a "left-IS" vs "right-IS"; morally, only diagonal blocks of A can
> > >>>> be used by PCFIELDSPLIT.
> > >>>>
> > >>>> • PCFIELDSPLIT offers a range of hard-coded block structures for the
> > >>>> final preconditioner, but the choice T1 = C*inv(Ap)*WT is not one of these
> > >>>> choices. Indeed, this first-stage CPR preconditioner T1 is *singular*, but there
> > >>>> is no obvious way for PCFIELDSPLIT to produce a singular preconditioner.
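(In display form, the construction described in this message is, writing $I$ and $Z$ for the $N \times N$ identity and zero blocks; the symbol $\mathbf{1}$ for the full-size identity is added notation, the formulas themselves are Sébastien's:

$$ W^T = \begin{bmatrix} I & I & I \end{bmatrix}, \qquad C = \begin{bmatrix} Z \\ Z \\ I \end{bmatrix}, \qquad A_p = W^T A\, C, \qquad T_1 = C\, A_p^{-1} W^T, $$

$$ M_{CPR} = T_2\,(\mathbf{1} - A\, T_1) + T_1, \qquad T_2 = \mathrm{BILU}(0)\ \text{of}\ A, $$

with $A_p^{-1}$ applied approximately by AMG. $T_1$ alone is singular; $M_{CPR}$ is not.)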
> > >>>>
> > >>>> Note that the documentation for PCFIELDSPLIT says that "The
> > >>>> Constrained Pressure Preconditioner (CPR) does not appear to be currently
> > >>>> implementable directly with PCFIELDSPLIT". Unless there are very significant
> > >>>> capabilities that are not documented, I don't see how CPR can be
> > >>>> implemented with PCFIELDSPLIT.
> > >>>>
> > >>>> Elsewhere, someone proposed putting the two preconditioners T1 and T2
> > >>>> on each side of A, e.g. T1*A*T2. That is a very bad idea because T1 is
> > >>>> singular and hence T1*A*T2 is also singular. The correct CPR
> > >>>> preconditioner is nonsingular despite the fact that T1 is singular,
> > >>>> and MCPR is given by the formula MCPR = T2*(I-A*T1)+T1, where T2 =
> > >>>> BILU(0) of A. (There is a proof, due to Felix Kwok, that BILU(0) works
> > >>>> even though ILU(0) craps out on vanishing diagonal entries.)
> > >>>>
> > >>>> I'm also attaching a sample MATLAB code that runs the CPR preconditioner
> > >>>> on some fabricated random matrix A. I emphasize that this is not a realistic
> > >>>> matrix, but it still illustrates that the algorithm works, and that MCPR is better
> > >>>> than T2 alone. Note that T1 cannot be used alone since it is singular. Further
> > >>>> gains are expected when Robert's realistic code with the correct physics
> > >>>> comes online.
> > >>>>
> > >>>> <image003.jpg>
> > >>>> I hope this clarifies some things.
> > >>>>
> > >>>> Sébastien Loisel
> > >>>> Assistant Professor
> > >>>> Department of Mathematics, Heriot-Watt University, Riccarton, EH14 4AS,
> > >>>> United Kingdom
> > >>>> web: http://www.ma.hw.ac.uk/~loisel/
> > >>>> email: S.Loisel at hw.ac.uk
> > >>>> phone: +44 131 451 3234
> > >>>> fax: +44 131 451 3249
> > >
> > > --
> > > Sébastien Loisel
> > > Assistant Professor
> > > Department of Mathematics, Heriot-Watt University
> > > Riccarton, EH14 4AS, United Kingdom
> > > web: http://www.ma.hw.ac.uk/~loisel/
> > > email: S.Loisel at hw.ac.uk
> > > phone: +44 131 451 3234
> > > fax: +44 131 451 3249
> >
> > --
> > Sébastien Loisel
> > Assistant Professor
> > Department of Mathematics, Heriot-Watt University
> > Riccarton, EH14 4AS, United Kingdom
> > web: http://www.ma.hw.ac.uk/~loisel/
> > email: S.Loisel at hw.ac.uk
> > phone: +44 131 451 3234
> > fax: +44 131 451 3249
> >
> > <petsc.zip>
>
> --
> Sébastien Loisel
> Assistant Professor
> Department of Mathematics, Heriot-Watt University
> Riccarton, EH14 4AS, United Kingdom
> web: http://www.ma.hw.ac.uk/~loisel/
> email: S.Loisel at hw.ac.uk
> phone: +44 131 451 3234
> fax: +44 131 451 3249

--====-=-=--

From fengshw3 at mail2.sysu.edu.cn Sun Jun 25 21:43:08 2023
From: fengshw3 at mail2.sysu.edu.cn (冯上玮)
Date: Mon, 26 Jun 2023 10:43:08 +0800
Subject: [petsc-users] Error while checking with ex19 after installation
Message-ID:

Hi,

Recently, I finally installed PETSc with Cygwin and obtained the library files. However, the test with ex19 failed, both with 1 MPI process and with 2. The MPI used is MS-MPI.
The detailed message printed is $ make PETSC_DIR=/cygdrive/d/mypetsc PETSC_ARCH="" check Running check examples to verify correct installation Using PETSC_DIR=/cygdrive/d/mypetsc and PETSC_ARCH= Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process See https://petsc.org/release/faq/ job aborted: [ranks] message [0] process exited without calling finalize ---- error analysis ----- [0] on LAPTOP-4FSVP96B ./ex19 ended prematurely and may have crashed. exit code 0xc00000fd ---- error analysis ----- Possible error running C/C++ src/snes/tutorials/ex19 with 2 MPI processes See https://petsc.org/release/faq/ job aborted: [ranks] message [0] process exited without calling finalize [1] terminated ---- error analysis ----- [0] on LAPTOP-4FSVP96B ./ex19 ended prematurely and may have crashed. exit code 0xc00000fd ---- error analysis ----- Completed test examples Could you give any suggestions? Btw, my configuration command is ./configure --prefix=/cygdrive/d/mypetsc --with-cc='win32fe cl' --with-fc=0 --with-cxx=0 --download-f2cblaslapack --with-shared-libraries=0 --with-mpi-include='[/cygdrive/d/MicrosoftSDKs/MPI/Include,/cygdrive/d/MicrosoftSDKs/MPI/Include/x64]' --with-mpi-lib='[/cygdrive/d/MicrosoftSDKs/MPI/Lib/x64/msmpi.lib,/cygdrive/d/MicrosoftSDKs/MPI/Lib/x64/msmpifec.lib]' --with-mpiexec=/cygdrive/d/MicrosoftMPI/Bin/mpiexec Waiting for your reply and thanks very much, FENG -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sun Jun 25 22:03:03 2023 From: bsmith at petsc.dev (Barry Smith) Date: Sun, 25 Jun 2023 23:03:03 -0400 Subject: [petsc-users] Error while checking with ex19 after intallation In-Reply-To: References: Message-ID: <80A9D98F-E08C-46CB-9670-C4D04676D9A0@petsc.dev> Googling these messages find other people who have received similar messages while working Microsoft Windows and MPI but nothing particularly helpful I could find. I am guessing an incompatibility between the Microsoft MPI and cl compiler you are using. I suggest installing the very latest of these both and see if the problem persists. Barry > On Jun 25, 2023, at 10:43 PM, ??? wrote: > > Hi, > Recently, I finally installed PETSc with Cygwin and obtained library files. However, the test of Ex19 was failed, both with 1 MPI and 2 MPI. The MPI used is MSMPI. The detailed message printed is > > $ make PETSC_DIR=/cygdrive/d/mypetsc PETSC_ARCH="" check > Running check examples to verify correct installation > Using PETSC_DIR=/cygdrive/d/mypetsc and PETSC_ARCH= > Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process > See https://petsc.org/release/faq/ > > job aborted: > [ranks] message > > [0] process exited without calling finalize > > ---- error analysis ----- > > [0] on LAPTOP-4FSVP96B > ./ex19 ended prematurely and may have crashed. exit code 0xc00000fd > > ---- error analysis ----- > Possible error running C/C++ src/snes/tutorials/ex19 with 2 MPI processes > See https://petsc.org/release/faq/ > > job aborted: > [ranks] message > > [0] process exited without calling finalize > > [1] terminated > > ---- error analysis ----- > > [0] on LAPTOP-4FSVP96B > ./ex19 ended prematurely and may have crashed. exit code 0xc00000fd > > ---- error analysis ----- > Completed test examples > > Could you give any suggestions? 
> > Btw, my configuration command is > > ./configure --prefix=/cygdrive/d/mypetsc --with-cc='win32fe cl' --with-fc=0 --with-cxx=0 --download-f2cblaslapack --with-shared-libraries=0 --with-mpi-include='[/cygdrive/d/MicrosoftSDKs/MPI/Include,/cygdrive/d/MicrosoftSDKs/MPI/Include/x64]' --with-mpi-lib='[/cygdrive/d/MicrosoftSDKs/MPI/Lib/x64/msmpi.lib,/cygdrive/d/MicrosoftSDKs/MPI/Lib/x64/msmpifec.lib]' --with-mpiexec=/cygdrive/d/MicrosoftMPI/Bin/mpiexec > > Waiting for your reply and thanks very much, > FENG From marcos.vanella at nist.gov Mon Jun 26 10:34:41 2023 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Mon, 26 Jun 2023 15:34:41 +0000 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution Message-ID: Hi, I was wondering if anyone has experience on what combinations are more efficient to solve a Poisson problem derived from a 7 point stencil on a single mesh (serial). I've been doing some tests of multigrid and cholesky on a 50^3 mesh. -pc_type mg takes about 75% more time than -pc_type cholesky -pc_factor_mat_solver_type cholmod for the case I'm testing. I'm new to PETSc so any suggestions are most welcome and appreciated, Marcos -------------- next part -------------- An HTML attachment was scrubbed... URL: From srcs at mpcdf.mpg.de Mon Jun 26 10:44:37 2023 From: srcs at mpcdf.mpg.de (Srikanth Sathyanarayana) Date: Mon, 26 Jun 2023 17:44:37 +0200 Subject: [petsc-users] Using DMDA for a block-structured grid approach Message-ID: <9f8889fc-e904-a67d-daf1-7b29acf052d9@mpcdf.mpg.de> Dear PETSc developers, I am currently working on a Gyrokinetic code where I essentially have to implement a block structured grid approach in one of the subdomains of the phase space coordinates. I have attached one such example in the x - v_parallel subdomains where I go from a full grid to a grid based on 4 blocks (divided along x direction) which is still Cartesian but misaligned across blocks (the grid is a very coarse representation). So the idea is to basically create a library for the existing solver and try to implement the block structured grid approach which mainly involves some sort of interpolation between the blocks to align the points. I came up with an idea to implement this using DMDA. I looked into the old threads where you have suggested using DMComposite in order to tackle such problems although a clear path for the interpolation between the DM's was not clarified. Nonetheless, my main questions were: 1. Do you still suggest using DMComposite to approach this problem. 2. Is there a way to use DMDA where the user provides the allocation? My main problem is that I am not allowed to change the solvers data structure 3. I looked into VecCreateMPIWithArray for the user provided allocation, however I am not very sure if this Vector can be used with the DMDA operations. Overall, I request you to please let me know what you think of this approach (using DMDA) and I would be grateful if you could suggest me any alternatives. Thanks and regards, Srikanth -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screenshot from 2023-06-26 17-24-32.png Type: image/png Size: 39427 bytes Desc: not available URL: From knepley at gmail.com Mon Jun 26 11:01:14 2023 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 26 Jun 2023 12:01:14 -0400 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: On Mon, Jun 26, 2023 at 11:34?AM Vanella, Marcos (Fed) via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi, I was wondering if anyone has experience on what combinations are more > efficient to solve a Poisson problem derived from a 7 point stencil on a > single mesh (serial). > I've been doing some tests of multigrid and cholesky on a 50^3 mesh. *-pc_type > mg* takes about 75% more time than *-pc_type cholesky > -pc_factor_mat_solver_type cholmod* for the case I'm testing. > I'm new to PETSc so any suggestions are most welcome and appreciated, > Hmm, that does not match my experience. We can help look at it if you send the output of -log_view -ksp_view -ksp_monitor_true_residual for both cases. I would expect MG to start being faster above maybe 25-50K unknowns. Thanks, Matt > Marcos > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jun 26 11:05:07 2023 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 26 Jun 2023 12:05:07 -0400 Subject: [petsc-users] Using DMDA for a block-structured grid approach In-Reply-To: <9f8889fc-e904-a67d-daf1-7b29acf052d9@mpcdf.mpg.de> References: <9f8889fc-e904-a67d-daf1-7b29acf052d9@mpcdf.mpg.de> Message-ID: On Mon, Jun 26, 2023 at 11:44?AM Srikanth Sathyanarayana wrote: > Dear PETSc developers, > > > I am currently working on a Gyrokinetic code where I essentially have to > implement a block structured grid approach in one of the subdomains of > the phase space coordinates. I have attached one such example in the x - > v_parallel subdomains where I go from a full grid to a grid based on 4 > blocks (divided along x direction) which is still Cartesian but > misaligned across blocks (the grid is a very coarse representation). So > the idea is to basically create a library for the existing solver and > try to implement the block structured grid approach which mainly > involves some sort of interpolation between the blocks to align the points. > > > I came up with an idea to implement this using DMDA. I looked into the > old threads where you have suggested using DMComposite in order to > tackle such problems although a clear path for the interpolation between > the DM's was not clarified. Nonetheless, my main questions were: > > 1. Do you still suggest using DMComposite to approach this problem. > Maybe > 2. Is there a way to use DMDA where the user provides the allocation? My > main problem is that I am not allowed to change the solvers data structure > I do not understand this question. > 3. I looked into VecCreateMPIWithArray for the user provided allocation, > however I am not very sure if this Vector can be used with the DMDA > operations. > It is unlikely. > Overall, I request you to please let me know what you think of this > approach (using DMDA) and I would be grateful if you could suggest me > any alternatives. > Can you give a short argument for your approach? 
For example, why would I want to use a multi-block approach instead of just using a single block? To save on storage? On computing? How much will you save? Why would I not want an unstructured grid covering the same area? Why would I not use structured-adaptive refinement (octree)? Thanks, Matt > Thanks and regards, > > Srikanth > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Jun 26 11:05:43 2023 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 26 Jun 2023 12:05:43 -0400 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: I'm not sure what MG is doing with an "unstructured" problem. I assume you are not using DMDA. -pc_type gamg should work I would configure with hypre and try that also: -pc_type hypre As Matt said MG should be faster. How many iterations was it taking? Try a 100^3 and check that the iteration count does not change much, if at all. Mark On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi, I was wondering if anyone has experience on what combinations are more > efficient to solve a Poisson problem derived from a 7 point stencil on a > single mesh (serial). > I've been doing some tests of multigrid and cholesky on a 50^3 mesh. *-pc_type > mg* takes about 75% more time than *-pc_type cholesky > -pc_factor_mat_solver_type cholmod* for the case I'm testing. > I'm new to PETSc so any suggestions are most welcome and appreciated, > Marcos > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jun 26 11:07:37 2023 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 26 Jun 2023 12:07:37 -0400 Subject: [petsc-users] Using DMDA for a block-structured grid approach In-Reply-To: <9f8889fc-e904-a67d-daf1-7b29acf052d9@mpcdf.mpg.de> References: <9f8889fc-e904-a67d-daf1-7b29acf052d9@mpcdf.mpg.de> Message-ID: <29E83805-35D1-46B5-8812-27F6078F7E73@petsc.dev> > On Jun 26, 2023, at 11:44 AM, Srikanth Sathyanarayana wrote: > > Dear PETSc developers, > > > I am currently working on a Gyrokinetic code where I essentially have to implement a block structured grid approach in one of the subdomains of the phase space coordinates. I have attached one such example in the x - v_parallel subdomains where I go from a full grid to a grid based on 4 blocks (divided along x direction) which is still Cartesian but misaligned across blocks (the grid is a very coarse representation). So the idea is to basically create a library for the existing solver and try to implement the block structured grid approach which mainly involves some sort of interpolation between the blocks to align the points. > > > I came up with an idea to implement this using DMDA. I looked into the old threads where you have suggested using DMComposite in order to tackle such problems although a clear path for the interpolation between the DM's was not clarified. Nonetheless, my main questions were: > > 1. Do you still suggest using DMComposite to approach this problem. Unfortunately, that is all we have for combining DM's. 
You can use unstructured, or structured or unstructed with quad-tree-type refinement but we don't have a " canned" approach for combining a bunch of structured grids together efficiently and cleanly (lots of issues come up in trying to design such a thing in a distributed memory environment since some blocks may need to live on different number of MPI ranks) > > 2. Is there a way to use DMDA where the user provides the allocation? My main problem is that I am not allowed to change the solvers data structure The allocation for what? > > 3. I looked into VecCreateMPIWithArray for the user provided allocation, however I am not very sure if this Vector can be used with the DMDA operations. Yes, you can use these variants to create vectors that you use with DMDA; so long as they have the correct dimensions. > > > Overall, I request you to please let me know what you think of this approach (using DMDA) and I would be grateful if you could suggest me any alternatives. > > > Thanks and regards, > > Srikanth > From marcos.vanella at nist.gov Mon Jun 26 11:08:39 2023 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Mon, 26 Jun 2023 16:08:39 +0000 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: Than you Matt and Mark, I'll try your suggestions. To configure with hypre can I just use the --download-hypre configure line? That is what I did with suitesparse, very nice. ________________________________ From: Mark Adams Sent: Monday, June 26, 2023 12:05 PM To: Vanella, Marcos (Fed) Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution I'm not sure what MG is doing with an "unstructured" problem. I assume you are not using DMDA. -pc_type gamg should work I would configure with hypre and try that also: -pc_type hypre As Matt said MG should be faster. How many iterations was it taking? Try a 100^3 and check that the iteration count does not change much, if at all. Mark On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users > wrote: Hi, I was wondering if anyone has experience on what combinations are more efficient to solve a Poisson problem derived from a 7 point stencil on a single mesh (serial). I've been doing some tests of multigrid and cholesky on a 50^3 mesh. -pc_type mg takes about 75% more time than -pc_type cholesky -pc_factor_mat_solver_type cholmod for the case I'm testing. I'm new to PETSc so any suggestions are most welcome and appreciated, Marcos -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jun 26 11:11:25 2023 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 26 Jun 2023 12:11:25 -0400 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: On Mon, Jun 26, 2023 at 12:08?PM Vanella, Marcos (Fed) via petsc-users < petsc-users at mcs.anl.gov> wrote: > Than you Matt and Mark, I'll try your suggestions. To configure with hypre > can I just use the --download-hypre configure line? > Yes, Thanks, Matt > That is what I did with suitesparse, very nice. 
> ------------------------------ > *From:* Mark Adams > *Sent:* Monday, June 26, 2023 12:05 PM > *To:* Vanella, Marcos (Fed) > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] SOLVE + PC combination for 7 point stencil > (unstructured) poisson solution > > I'm not sure what MG is doing with an "unstructured" problem. I assume you > are not using DMDA. > -pc_type gamg should work > I would configure with hypre and try that also: -pc_type hypre > > As Matt said MG should be faster. How many iterations was it taking? > Try a 100^3 and check that the iteration count does not change much, if at > all. > > Mark > > > On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Hi, I was wondering if anyone has experience on what combinations are more > efficient to solve a Poisson problem derived from a 7 point stencil on a > single mesh (serial). > I've been doing some tests of multigrid and cholesky on a 50^3 mesh. *-pc_type > mg* takes about 75% more time than *-pc_type cholesky > -pc_factor_mat_solver_type cholmod* for the case I'm testing. > I'm new to PETSc so any suggestions are most welcome and appreciated, > Marcos > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Jun 26 14:32:19 2023 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 26 Jun 2023 15:32:19 -0400 Subject: [petsc-users] Using DMDA for a block-structured grid approach In-Reply-To: <29E83805-35D1-46B5-8812-27F6078F7E73@petsc.dev> References: <9f8889fc-e904-a67d-daf1-7b29acf052d9@mpcdf.mpg.de> <29E83805-35D1-46B5-8812-27F6078F7E73@petsc.dev> Message-ID: Let me backup a bit. I think you have an application that has a Cartesian, or a least fine, grid and you "have to implement a block structured grid approach". Is this block structured solver well developed? We have support for block structured (quad-tree) grids you might want to use. This is a common approach for block structured grids. Thanks, Mark On Mon, Jun 26, 2023 at 12:08?PM Barry Smith wrote: > > > > On Jun 26, 2023, at 11:44 AM, Srikanth Sathyanarayana > wrote: > > > > Dear PETSc developers, > > > > > > I am currently working on a Gyrokinetic code where I essentially have to > implement a block structured grid approach in one of the subdomains of the > phase space coordinates. I have attached one such example in the x - > v_parallel subdomains where I go from a full grid to a grid based on 4 > blocks (divided along x direction) which is still Cartesian but misaligned > across blocks (the grid is a very coarse representation). So the idea is to > basically create a library for the existing solver and try to implement the > block structured grid approach which mainly involves some sort of > interpolation between the blocks to align the points. > > > > > > I came up with an idea to implement this using DMDA. I looked into the > old threads where you have suggested using DMComposite in order to tackle > such problems although a clear path for the interpolation between the DM's > was not clarified. Nonetheless, my main questions were: > > > > 1. Do you still suggest using DMComposite to approach this problem. > > Unfortunately, that is all we have for combining DM's. 
You can use > unstructured, or structured or unstructed with quad-tree-type refinement > but we don't have a " > canned" approach for combining a bunch of structured grids together > efficiently and cleanly (lots of issues come up in trying to design such a > thing in a distributed memory environment since some blocks may need to > live on different number of MPI ranks) > > > > 2. Is there a way to use DMDA where the user provides the allocation? My > main problem is that I am not allowed to change the solvers data structure > > The allocation for what? > > > > 3. I looked into VecCreateMPIWithArray for the user provided allocation, > however I am not very sure if this Vector can be used with the DMDA > operations. > > Yes, you can use these variants to create vectors that you use with > DMDA; so long as they have the correct dimensions. > > > > > > Overall, I request you to please let me know what you think of this > approach (using DMDA) and I would be grateful if you could suggest me any > alternatives. > > > > > > Thanks and regards, > > > > Srikanth > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jun 26 16:39:09 2023 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 26 Jun 2023 17:39:09 -0400 Subject: [petsc-users] Using DMDA for a block-structured grid approach In-Reply-To: <69BE5FD5-74B9-4AA8-BB24-89096E47AF76@mpcdf.mpg.de> References: <9f8889fc-e904-a67d-daf1-7b29acf052d9@mpcdf.mpg.de> <29E83805-35D1-46B5-8812-27F6078F7E73@petsc.dev> <69BE5FD5-74B9-4AA8-BB24-89096E47AF76@mpcdf.mpg.de> Message-ID: <4AF87689-C03C-4F71-B714-A62AE32031CB@petsc.dev> > On Jun 26, 2023, at 5:12 PM, Srikanth Sathyanarayana wrote: > > Dear Barry and Mark, > > Thank you very much for your response. > >>> The allocation for what? > What I mean is that, we don?t want additional memory allocations through DMDA Vectors. I am not sure if it is even possible, basically we would want to map our existing vectors through VecCreateMPIWithArray for example and implement a way for it to interact with the DMDA structure so it can assist ghost updates for each block. So long as the vectors are the same size as those that DMDA would give you then they work just like you got them with DMDA. > Further, figure out a way to also perform some kind of interpolation between the block boundaries before the ghost exchange. > >> I think you have an application that has a Cartesian, or a least fine, grid and you "have to implement a block structured grid approach". >> Is this block structured solver well developed? >> We have support for block structured (quad-tree) grids you might want to use. This is a common approach for block structured grids. > We would like to develop a multi-block block-structured grid library mainly to reduce the number of grid points used. We want to use PETSc mainly as some kind of a distributed data container to simplify the process of performing interpolations between the blocks and help with the ghost exchanges. Currently, we are not looking into any grid refinement techniques. I suggest exploring if there are other libraries that provide multi-block block-structured grid that you might use, possible in conjunction with the PETSc solvers. Providing a general multi-block block-structured grid library is a big complicated enterprise and PETSc does not provide such a thing. Certain parts can be hacked with DMDA and DMCOMPOSITE but not properly as a properly designed library would. 
> > Thanks, > Srikanth > > >> On 26 Jun 2023, at 21:32, Mark Adams wrote: >> >> Let me backup a bit. >> I think you have an application that has a Cartesian, or a least fine, grid and you "have to implement a block structured grid approach". >> Is this block structured solver well developed? >> We have support for block structured (quad-tree) grids you might want to use. This is a common approach for block structured grids. >> >> Thanks, >> Mark >> >> >> >> On Mon, Jun 26, 2023 at 12:08?PM Barry Smith > wrote: >>> >>> >>> > On Jun 26, 2023, at 11:44 AM, Srikanth Sathyanarayana > wrote: >>> > >>> > Dear PETSc developers, >>> > >>> > >>> > I am currently working on a Gyrokinetic code where I essentially have to implement a block structured grid approach in one of the subdomains of the phase space coordinates. I have attached one such example in the x - v_parallel subdomains where I go from a full grid to a grid based on 4 blocks (divided along x direction) which is still Cartesian but misaligned across blocks (the grid is a very coarse representation). So the idea is to basically create a library for the existing solver and try to implement the block structured grid approach which mainly involves some sort of interpolation between the blocks to align the points. >>> > >>> > >>> > I came up with an idea to implement this using DMDA. I looked into the old threads where you have suggested using DMComposite in order to tackle such problems although a clear path for the interpolation between the DM's was not clarified. Nonetheless, my main questions were: >>> > >>> > 1. Do you still suggest using DMComposite to approach this problem. >>> >>> Unfortunately, that is all we have for combining DM's. You can use unstructured, or structured or unstructed with quad-tree-type refinement but we don't have a " >>> canned" approach for combining a bunch of structured grids together efficiently and cleanly (lots of issues come up in trying to design such a thing in a distributed memory environment since some blocks may need to live on different number of MPI ranks) >>> > >>> > 2. Is there a way to use DMDA where the user provides the allocation? My main problem is that I am not allowed to change the solvers data structure >>> >>> The allocation for what? >>> > >>> > 3. I looked into VecCreateMPIWithArray for the user provided allocation, however I am not very sure if this Vector can be used with the DMDA operations. >>> >>> Yes, you can use these variants to create vectors that you use with DMDA; so long as they have the correct dimensions. >>> > >>> > >>> > Overall, I request you to please let me know what you think of this approach (using DMDA) and I would be grateful if you could suggest me any alternatives. >>> > >>> > >>> > Thanks and regards, >>> > >>> > Srikanth >>> > >>> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexlindsay239 at gmail.com Mon Jun 26 20:03:08 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Mon, 26 Jun 2023 18:03:08 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: <15FFDCF6-48C9-4331-A9FE-932BBDD418D1@lip6.fr> References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> <15FFDCF6-48C9-4331-A9FE-932BBDD418D1@lip6.fr> Message-ID: Returning to Sebastian's question about the correctness of the current LSC implementation: in the taxonomy paper that Jed linked to (which talks about SIMPLE, PCD, and LSC), equation 21 shows four applications of the inverse of the velocity mass matrix. In the PETSc implementation there are at most two applications of the reciprocal of the diagonal of A (an approximation to the velocity mass matrix without more plumbing, as already pointed out). It seems like for code implementations in which there are possible scaling differences between the velocity and pressure equations, that this difference in the number of inverse applications could be significant? I know Jed said that these scalings wouldn't really matter if you have a uniform grid, but I'm not 100% convinced yet. I might try fiddling around with adding two more reciprocal applications. On Fri, Jun 23, 2023 at 1:09?PM Pierre Jolivet wrote: > > On 23 Jun 2023, at 10:06 PM, Pierre Jolivet > wrote: > > > On 23 Jun 2023, at 9:39 PM, Alexander Lindsay > wrote: > > Ah, I see that if I use Pierre's new 'full' option for > -mat_schur_complement_ainv_type > > > That was not initially done by me > > > Oops, sorry for the noise, looks like it was done by me indeed > in 9399e4fd88c6621aad8fe9558ce84df37bd6fada? > > Thanks, > Pierre > > (though I recently tweaked MatSchurComplementComputeExplicitOperator() a > bit to use KSPMatSolve(), so that if you have a small Schur complement ? > which is not really the case for NS ? this could be a viable option, it was > previously painfully slow). > > Thanks, > Pierre > > that I get a single iteration for the Schur complement solve with LU. > That's a nice testing option > > On Fri, Jun 23, 2023 at 12:02?PM Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> I guess it is because the inverse of the diagonal form of A00 becomes a >> poor representation of the inverse of A00? I guess naively I would have >> thought that the blockdiag form of A00 is A00 >> >> On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay < >> alexlindsay239 at gmail.com> wrote: >> >>> Hi Jed, I will come back with answers to all of your questions at some >>> point. I mostly just deal with MOOSE users who come to me and tell me their >>> solve is converging slowly, asking me how to fix it. So I generally assume >>> they have built an appropriate mesh and problem size for the problem they >>> want to solve and added appropriate turbulence modeling (although my >>> general assumption is often violated). >>> >>> > And to confirm, are you doing a nonlinearly implicit velocity-pressure >>> solve? >>> >>> Yes, this is our default. >>> >>> A general question: it seems that it is well known that the quality of >>> selfp degrades with increasing advection. Why is that? >>> >>> On Wed, Jun 7, 2023 at 8:01?PM Jed Brown wrote: >>> >>>> Alexander Lindsay writes: >>>> >>>> > This has been a great discussion to follow. 
Regarding >>>> > >>>> >> when time stepping, you have enough mass matrix that cheaper >>>> preconditioners are good enough >>>> > >>>> > I'm curious what some algebraic recommendations might be for high Re >>>> in >>>> > transients. >>>> >>>> What mesh aspect ratio and streamline CFL number? Assuming your model >>>> is turbulent, can you say anything about momentum thickness Reynolds number >>>> Re_?? What is your wall normal spacing in plus units? (Wall resolved or >>>> wall modeled?) >>>> >>>> And to confirm, are you doing a nonlinearly implicit velocity-pressure >>>> solve? >>>> >>>> > I've found one-level DD to be ineffective when applied monolithically >>>> or to the momentum block of a split, as it scales with the mesh size. >>>> >>>> I wouldn't put too much weight on "scaling with mesh size" per se. You >>>> want an efficient solver for the coarsest mesh that delivers sufficient >>>> accuracy in your flow regime. Constants matter. >>>> >>>> Refining the mesh while holding time steps constant changes the >>>> advective CFL number as well as cell Peclet/cell Reynolds numbers. A >>>> meaningful scaling study is to increase Reynolds number (e.g., by growing >>>> the domain) while keeping mesh size matched in terms of plus units in the >>>> viscous sublayer and Kolmogorov length in the outer boundary layer. That >>>> turns out to not be a very automatic study to do, but it's what matters and >>>> you can spend a lot of time chasing ghosts with naive scaling studies. >>>> >>> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Mon Jun 26 20:06:26 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Mon, 26 Jun 2023 18:06:26 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> <15FFDCF6-48C9-4331-A9FE-932BBDD418D1@lip6.fr> Message-ID: I guess that similar to the discussions about selfp, the approximation of the velocity mass matrix by the diagonal of the velocity sub-matrix will improve when running a transient as opposed to a steady calculation, especially if the time derivative is lumped.... Just thinking while typing On Mon, Jun 26, 2023 at 6:03?PM Alexander Lindsay wrote: > Returning to Sebastian's question about the correctness of the current LSC > implementation: in the taxonomy paper that Jed linked to (which talks about > SIMPLE, PCD, and LSC), equation 21 shows four applications of the inverse > of the velocity mass matrix. In the PETSc implementation there are at most > two applications of the reciprocal of the diagonal of A (an approximation > to the velocity mass matrix without more plumbing, as already pointed out). > It seems like for code implementations in which there are possible scaling > differences between the velocity and pressure equations, that this > difference in the number of inverse applications could be significant? I > know Jed said that these scalings wouldn't really matter if you have a > uniform grid, but I'm not 100% convinced yet. > > I might try fiddling around with adding two more reciprocal applications. 
> > On Fri, Jun 23, 2023 at 1:09?PM Pierre Jolivet > wrote: > >> >> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet >> wrote: >> >> >> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay >> wrote: >> >> Ah, I see that if I use Pierre's new 'full' option for >> -mat_schur_complement_ainv_type >> >> >> That was not initially done by me >> >> >> Oops, sorry for the noise, looks like it was done by me indeed >> in 9399e4fd88c6621aad8fe9558ce84df37bd6fada? >> >> Thanks, >> Pierre >> >> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a >> bit to use KSPMatSolve(), so that if you have a small Schur complement ? >> which is not really the case for NS ? this could be a viable option, it was >> previously painfully slow). >> >> Thanks, >> Pierre >> >> that I get a single iteration for the Schur complement solve with LU. >> That's a nice testing option >> >> On Fri, Jun 23, 2023 at 12:02?PM Alexander Lindsay < >> alexlindsay239 at gmail.com> wrote: >> >>> I guess it is because the inverse of the diagonal form of A00 becomes a >>> poor representation of the inverse of A00? I guess naively I would have >>> thought that the blockdiag form of A00 is A00 >>> >>> On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay < >>> alexlindsay239 at gmail.com> wrote: >>> >>>> Hi Jed, I will come back with answers to all of your questions at some >>>> point. I mostly just deal with MOOSE users who come to me and tell me their >>>> solve is converging slowly, asking me how to fix it. So I generally assume >>>> they have built an appropriate mesh and problem size for the problem they >>>> want to solve and added appropriate turbulence modeling (although my >>>> general assumption is often violated). >>>> >>>> > And to confirm, are you doing a nonlinearly implicit >>>> velocity-pressure solve? >>>> >>>> Yes, this is our default. >>>> >>>> A general question: it seems that it is well known that the quality of >>>> selfp degrades with increasing advection. Why is that? >>>> >>>> On Wed, Jun 7, 2023 at 8:01?PM Jed Brown wrote: >>>> >>>>> Alexander Lindsay writes: >>>>> >>>>> > This has been a great discussion to follow. Regarding >>>>> > >>>>> >> when time stepping, you have enough mass matrix that cheaper >>>>> preconditioners are good enough >>>>> > >>>>> > I'm curious what some algebraic recommendations might be for high Re >>>>> in >>>>> > transients. >>>>> >>>>> What mesh aspect ratio and streamline CFL number? Assuming your model >>>>> is turbulent, can you say anything about momentum thickness Reynolds number >>>>> Re_?? What is your wall normal spacing in plus units? (Wall resolved or >>>>> wall modeled?) >>>>> >>>>> And to confirm, are you doing a nonlinearly implicit velocity-pressure >>>>> solve? >>>>> >>>>> > I've found one-level DD to be ineffective when applied >>>>> monolithically or to the momentum block of a split, as it scales with the >>>>> mesh size. >>>>> >>>>> I wouldn't put too much weight on "scaling with mesh size" per se. You >>>>> want an efficient solver for the coarsest mesh that delivers sufficient >>>>> accuracy in your flow regime. Constants matter. >>>>> >>>>> Refining the mesh while holding time steps constant changes the >>>>> advective CFL number as well as cell Peclet/cell Reynolds numbers. A >>>>> meaningful scaling study is to increase Reynolds number (e.g., by growing >>>>> the domain) while keeping mesh size matched in terms of plus units in the >>>>> viscous sublayer and Kolmogorov length in the outer boundary layer. 
That >>>>> turns out to not be a very automatic study to do, but it's what matters and >>>>> you can spend a lot of time chasing ghosts with naive scaling studies. >>>>> >>>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From facklerpw at ornl.gov Tue Jun 27 09:30:40 2023 From: facklerpw at ornl.gov (Fackler, Philip) Date: Tue, 27 Jun 2023 14:30:40 +0000 Subject: [petsc-users] [EXTERNAL] Re: Initializing kokkos before petsc causes a problem In-Reply-To: References: Message-ID: Good morning Junchao! I'm following up here to see if there is any update to petsc to resolve this issue, or if we need to come up with a work-around. Thank you, Philip Fackler Research Software Engineer, Application Engineering Group Advanced Computing Systems Research Section Computer Science and Mathematics Division Oak Ridge National Laboratory ________________________________ From: Junchao Zhang Sent: Wednesday, June 7, 2023 22:45 To: Fackler, Philip Cc: petsc-users at mcs.anl.gov ; Blondel, Sophie ; xolotl-psi-development at lists.sourceforge.net Subject: [EXTERNAL] Re: [petsc-users] Initializing kokkos before petsc causes a problem Hi, Philip, Thanks for reporting. I will have a look at the issue. --Junchao Zhang On Wed, Jun 7, 2023 at 9:30?AM Fackler, Philip via petsc-users > wrote: I'm encountering a problem in xolotl. We initialize kokkos before initializing petsc. Therefore... The pointer referenced here: https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/impls/basic/kokkos/sfkok.kokkos.cxx#L363 from here: https://gitlab.com/petsc/petsc/-/blob/main/include/petsc_kokkos.hpp remains null because the code to initialize it is skipped here: https://gitlab.com/petsc/petsc/-/blob/main/src/sys/objects/kokkos/kinit.kokkos.cxx#L28 See line 71. Can this be modified to allow for kokkos to have been initialized by the application before initializing petsc? Thank you for your help, Philip Fackler Research Software Engineer, Application Engineering Group Advanced Computing Systems Research Section Computer Science and Mathematics Division Oak Ridge National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Jun 27 09:58:44 2023 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 27 Jun 2023 09:58:44 -0500 Subject: [petsc-users] [EXTERNAL] Re: Initializing kokkos before petsc causes a problem In-Reply-To: References: Message-ID: Hi, Philip, It's my fault. I should follow up early that this problem was fixed by https://gitlab.com/petsc/petsc/-/merge_requests/6586. Could you try petsc/main? Thanks. --Junchao Zhang On Tue, Jun 27, 2023 at 9:30?AM Fackler, Philip wrote: > Good morning Junchao! I'm following up here to see if there is any update > to petsc to resolve this issue, or if we need to come up with a work-around. > > Thank you, > > > *Philip Fackler * > Research Software Engineer, Application Engineering Group > Advanced Computing Systems Research Section > Computer Science and Mathematics Division > *Oak Ridge National Laboratory* > ------------------------------ > *From:* Junchao Zhang > *Sent:* Wednesday, June 7, 2023 22:45 > *To:* Fackler, Philip > *Cc:* petsc-users at mcs.anl.gov ; Blondel, Sophie < > sblondel at utk.edu>; xolotl-psi-development at lists.sourceforge.net < > xolotl-psi-development at lists.sourceforge.net> > *Subject:* [EXTERNAL] Re: [petsc-users] Initializing kokkos before petsc > causes a problem > > Hi, Philip, > Thanks for reporting. 
I will have a look at the issue. > --Junchao Zhang > > > On Wed, Jun 7, 2023 at 9:30?AM Fackler, Philip via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > I'm encountering a problem in xolotl. We initialize kokkos before > initializing petsc. Therefore... > > The pointer referenced here: > > https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/impls/basic/kokkos/sfkok.kokkos.cxx#L363 > > > > > from here: > https://gitlab.com/petsc/petsc/-/blob/main/include/petsc_kokkos.hpp > > > remains null because the code to initialize it is skipped here: > > https://gitlab.com/petsc/petsc/-/blob/main/src/sys/objects/kokkos/kinit.kokkos.cxx#L28 > > See line 71. > > Can this be modified to allow for kokkos to have been initialized by the > application before initializing petsc? > > Thank you for your help, > > > *Philip Fackler * > Research Software Engineer, Application Engineering Group > Advanced Computing Systems Research Section > Computer Science and Mathematics Division > *Oak Ridge National Laboratory* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcos.vanella at nist.gov Tue Jun 27 10:23:44 2023 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Tue, 27 Jun 2023 15:23:44 +0000 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: Hi Mark and Matt, I tried swapping the preconditioner to cholmod and also the hypre Boomer AMG. They work just fine for my case. I also got my hands on a machine with NVIDIA gpus in one of our AI clusters. I compiled PETSc to make use of cuda and cuda-enabled openmpi (with gcc). I'm running the previous tests and want to also check some of the cuda enabled solvers. I was able to submit a case for the default Krylov solver with these runtime flags: -vec_type seqcuda -mat_type seqaijcusparse -pc_type cholesky -pc_factor_mat_solver_type cusparse. The case run to completion. I guess my question now is how do I monitor (if there is a way) that the GPU is being used in the calculation, and any other stats? Also, which other solver combination using GPU would you recommend for me to try? Can we compile PETSc with the cuda enabled version for CHOLMOD and HYPRE? Thank you for your help! Marcos ________________________________ From: Matthew Knepley Sent: Monday, June 26, 2023 12:11 PM To: Vanella, Marcos (Fed) Cc: Mark Adams ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution On Mon, Jun 26, 2023 at 12:08?PM Vanella, Marcos (Fed) via petsc-users > wrote: Than you Matt and Mark, I'll try your suggestions. To configure with hypre can I just use the --download-hypre configure line? Yes, Thanks, Matt That is what I did with suitesparse, very nice. ________________________________ From: Mark Adams > Sent: Monday, June 26, 2023 12:05 PM To: Vanella, Marcos (Fed) > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution I'm not sure what MG is doing with an "unstructured" problem. I assume you are not using DMDA. -pc_type gamg should work I would configure with hypre and try that also: -pc_type hypre As Matt said MG should be faster. How many iterations was it taking? Try a 100^3 and check that the iteration count does not change much, if at all. 
Mark On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users > wrote: Hi, I was wondering if anyone has experience on what combinations are more efficient to solve a Poisson problem derived from a 7 point stencil on a single mesh (serial). I've been doing some tests of multigrid and cholesky on a 50^3 mesh. -pc_type mg takes about 75% more time than -pc_type cholesky -pc_factor_mat_solver_type cholmod for the case I'm testing. I'm new to PETSc so any suggestions are most welcome and appreciated, Marcos -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fengshw3 at mail2.sysu.edu.cn Tue Jun 27 10:32:10 2023 From: fengshw3 at mail2.sysu.edu.cn (=?utf-8?B?5Yav5LiK546u?=) Date: Tue, 27 Jun 2023 23:32:10 +0800 Subject: [petsc-users] Problem in some macro when using VS+intel cl Message-ID: Hi,  After failure with MS-MPI once and once again, I tried icl+oneAPI and succeeded in installing and testing PESTc in Cygwin! However, (always however) when I copied the example code on Getting Started page on visual studio, there are tons of error like: I just wonder where the problem locates, I've googled this error message and it seems that it's induced by the difference of compilers, c.f. https://stackoverflow.com/questions/42136395/identifier-builtin-expect-is-undefined-during-ros-on-win-tutorial-talker-ex. But Intel says that they also provide such thing on icl, and I actually use this compiler instead of visual studio cl...  Anyway, the project could be built if I delete these error-checking macro. Installing feedback (or as a test result): When configure on windows, only icl + impi works, and in this case, both --with-cc and --with-cxx options need to point out the version like: --with-cc-std-c99 and --with-cxx-std-c++'ver'. Other combinations such as cl + impi, icl + msmpi, cl + msmpi never work. My tutor told me that older version of msmpi may work but I never try this. FENG. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 724CED62 at D9B26517.FA009B6400000000.jpg Type: image/jpeg Size: 186741 bytes Desc: not available URL: From facklerpw at ornl.gov Tue Jun 27 10:46:31 2023 From: facklerpw at ornl.gov (Fackler, Philip) Date: Tue, 27 Jun 2023 15:46:31 +0000 Subject: [petsc-users] [EXTERNAL] Re: Initializing kokkos before petsc causes a problem In-Reply-To: References: Message-ID: OK, great! I'll try it out soon. Thank you, Philip Fackler Research Software Engineer, Application Engineering Group Advanced Computing Systems Research Section Computer Science and Mathematics Division Oak Ridge National Laboratory ________________________________ From: Junchao Zhang Sent: Tuesday, June 27, 2023 10:58 To: Fackler, Philip Cc: petsc-users at mcs.anl.gov ; Blondel, Sophie ; xolotl-psi-development at lists.sourceforge.net Subject: Re: [EXTERNAL] Re: [petsc-users] Initializing kokkos before petsc causes a problem Hi, Philip, It's my fault. I should follow up early that this problem was fixed by https://gitlab.com/petsc/petsc/-/merge_requests/6586. Could you try petsc/main? Thanks. --Junchao Zhang On Tue, Jun 27, 2023 at 9:30?AM Fackler, Philip > wrote: Good morning Junchao! 
I'm following up here to see if there is any update to petsc to resolve this issue, or if we need to come up with a work-around. Thank you, Philip Fackler Research Software Engineer, Application Engineering Group Advanced Computing Systems Research Section Computer Science and Mathematics Division Oak Ridge National Laboratory ________________________________ From: Junchao Zhang > Sent: Wednesday, June 7, 2023 22:45 To: Fackler, Philip > Cc: petsc-users at mcs.anl.gov >; Blondel, Sophie >; xolotl-psi-development at lists.sourceforge.net > Subject: [EXTERNAL] Re: [petsc-users] Initializing kokkos before petsc causes a problem Hi, Philip, Thanks for reporting. I will have a look at the issue. --Junchao Zhang On Wed, Jun 7, 2023 at 9:30?AM Fackler, Philip via petsc-users > wrote: I'm encountering a problem in xolotl. We initialize kokkos before initializing petsc. Therefore... The pointer referenced here: https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/impls/basic/kokkos/sfkok.kokkos.cxx#L363 from here: https://gitlab.com/petsc/petsc/-/blob/main/include/petsc_kokkos.hpp remains null because the code to initialize it is skipped here: https://gitlab.com/petsc/petsc/-/blob/main/src/sys/objects/kokkos/kinit.kokkos.cxx#L28 See line 71. Can this be modified to allow for kokkos to have been initialized by the application before initializing petsc? Thank you for your help, Philip Fackler Research Software Engineer, Application Engineering Group Advanced Computing Systems Research Section Computer Science and Mathematics Division Oak Ridge National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From zisheng.ye at ansys.com Tue Jun 27 10:50:30 2023 From: zisheng.ye at ansys.com (Zisheng Ye) Date: Tue, 27 Jun 2023 15:50:30 +0000 Subject: [petsc-users] GAMG and Hypre preconditioner Message-ID: Dear PETSc Team We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG and Hypre preconditioners. We have encountered several issues that we would like to ask for your suggestions. First, we have couple of questions when working with a single MPI rank: 1. We have tested two backends, CUDA and Kokkos. One commonly encountered error is related to SpGEMM in CUDA when the mat is large as listed below: cudaMalloc((void **)&buffer2, bufferSize2) error( cudaErrorMemoryAllocation): out of memory For CUDA backend, one can use "-matmatmult_backend_cpu -matptap_backend_cpu" to avoid these problems. However, there seems no equivalent options in Kokkos backend. Is there any good practice to avoid this error for both backends and if we can avoid this error in Kokkos backend? 2. We have tested the combination of Hypre and Kokkos as backend. It looks like this combination is not compatible with each other, as we observed that KSPSolve takes a greater number of iterations to exit, and the residual norm in the post-checking is much larger than the one obtained when working with CUDA backend. This happens for matrices with block size larger than 1. Is there any explanation to the error? Second, we have couple more questions when working with multiple MPI ranks: 1. We are currently using OpenMPI as we couldnt get Intel MPI to work as a GPU-aware MPI, is this a known issue with Intel MPI? 2. With OpenMPI we currently see a slow down when increasing the MPI count as shown in the figure below, is this normal? [cid:9242808d-34af-4b51-8a0b-8295f0a012e5] Zisheng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 85131 bytes Desc: image.png URL: From jed at jedbrown.org Tue Jun 27 12:02:37 2023 From: jed at jedbrown.org (Jed Brown) Date: Tue, 27 Jun 2023 11:02:37 -0600 Subject: [petsc-users] GAMG and Hypre preconditioner In-Reply-To: References: Message-ID: <87cz1g3kk2.fsf@jedbrown.org> Zisheng Ye via petsc-users writes: > Dear PETSc Team > > We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG and Hypre preconditioners. We have encountered several issues that we would like to ask for your suggestions. > > First, we have couple of questions when working with a single MPI rank: > > 1. We have tested two backends, CUDA and Kokkos. One commonly encountered error is related to SpGEMM in CUDA when the mat is large as listed below: > > cudaMalloc((void **)&buffer2, bufferSize2) error( cudaErrorMemoryAllocation): out of memory > > For CUDA backend, one can use "-matmatmult_backend_cpu -matptap_backend_cpu" to avoid these problems. However, there seems no equivalent options in Kokkos backend. Is there any good practice to avoid this error for both backends and if we can avoid this error in Kokkos backend? Junchao will know more about KK tuning, but the faster GPU matrix-matrix algorithms use extra memory. We should be able to make the host option available with kokkos. > 2. We have tested the combination of Hypre and Kokkos as backend. It looks like this combination is not compatible with each other, as we observed that KSPSolve takes a greater number of iterations to exit, and the residual norm in the post-checking is much larger than the one obtained when working with CUDA backend. This happens for matrices with block size larger than 1. Is there any explanation to the error? > > Second, we have couple more questions when working with multiple MPI ranks: > > 1. We are currently using OpenMPI as we couldnt get Intel MPI to work as a GPU-aware MPI, is this a known issue with Intel MPI? As far as I know, Intel's MPI is only for SYCL/Intel GPUs. In general, GPU-aware MPI has been incredibly flaky on all HPC systems despite being introduced ten years ago. > 2. With OpenMPI we currently see a slow down when increasing the MPI count as shown in the figure below, is this normal? Could you share -log_view output from a couple representative runs? You could send those here or to petsc-maint at mcs.anl.gov. We need to see what kind of work is not scaling to attribute what may be causing it. From alexlindsay239 at gmail.com Tue Jun 27 12:41:58 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Tue, 27 Jun 2023 10:41:58 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> <15FFDCF6-48C9-4331-A9FE-932BBDD418D1@lip6.fr> Message-ID: I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which adds a couple more scaling applications of the inverse of the diagonal of A On Mon, Jun 26, 2023 at 6:06?PM Alexander Lindsay wrote: > I guess that similar to the discussions about selfp, the approximation of > the velocity mass matrix by the diagonal of the velocity sub-matrix will > improve when running a transient as opposed to a steady calculation, > especially if the time derivative is lumped.... 
Just thinking while typing > > On Mon, Jun 26, 2023 at 6:03?PM Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> Returning to Sebastian's question about the correctness of the current >> LSC implementation: in the taxonomy paper that Jed linked to (which talks >> about SIMPLE, PCD, and LSC), equation 21 shows four applications of the >> inverse of the velocity mass matrix. In the PETSc implementation there are >> at most two applications of the reciprocal of the diagonal of A (an >> approximation to the velocity mass matrix without more plumbing, as already >> pointed out). It seems like for code implementations in which there are >> possible scaling differences between the velocity and pressure equations, >> that this difference in the number of inverse applications could be >> significant? I know Jed said that these scalings wouldn't really matter if >> you have a uniform grid, but I'm not 100% convinced yet. >> >> I might try fiddling around with adding two more reciprocal applications. >> >> On Fri, Jun 23, 2023 at 1:09?PM Pierre Jolivet >> wrote: >> >>> >>> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet >>> wrote: >>> >>> >>> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay >>> wrote: >>> >>> Ah, I see that if I use Pierre's new 'full' option for >>> -mat_schur_complement_ainv_type >>> >>> >>> That was not initially done by me >>> >>> >>> Oops, sorry for the noise, looks like it was done by me indeed >>> in 9399e4fd88c6621aad8fe9558ce84df37bd6fada? >>> >>> Thanks, >>> Pierre >>> >>> (though I recently tweaked MatSchurComplementComputeExplicitOperator() a >>> bit to use KSPMatSolve(), so that if you have a small Schur complement ? >>> which is not really the case for NS ? this could be a viable option, it was >>> previously painfully slow). >>> >>> Thanks, >>> Pierre >>> >>> that I get a single iteration for the Schur complement solve with LU. >>> That's a nice testing option >>> >>> On Fri, Jun 23, 2023 at 12:02?PM Alexander Lindsay < >>> alexlindsay239 at gmail.com> wrote: >>> >>>> I guess it is because the inverse of the diagonal form of A00 becomes a >>>> poor representation of the inverse of A00? I guess naively I would have >>>> thought that the blockdiag form of A00 is A00 >>>> >>>> On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay < >>>> alexlindsay239 at gmail.com> wrote: >>>> >>>>> Hi Jed, I will come back with answers to all of your questions at some >>>>> point. I mostly just deal with MOOSE users who come to me and tell me their >>>>> solve is converging slowly, asking me how to fix it. So I generally assume >>>>> they have built an appropriate mesh and problem size for the problem they >>>>> want to solve and added appropriate turbulence modeling (although my >>>>> general assumption is often violated). >>>>> >>>>> > And to confirm, are you doing a nonlinearly implicit >>>>> velocity-pressure solve? >>>>> >>>>> Yes, this is our default. >>>>> >>>>> A general question: it seems that it is well known that the quality of >>>>> selfp degrades with increasing advection. Why is that? >>>>> >>>>> On Wed, Jun 7, 2023 at 8:01?PM Jed Brown wrote: >>>>> >>>>>> Alexander Lindsay writes: >>>>>> >>>>>> > This has been a great discussion to follow. Regarding >>>>>> > >>>>>> >> when time stepping, you have enough mass matrix that cheaper >>>>>> preconditioners are good enough >>>>>> > >>>>>> > I'm curious what some algebraic recommendations might be for high >>>>>> Re in >>>>>> > transients. >>>>>> >>>>>> What mesh aspect ratio and streamline CFL number? 
Assuming your model >>>>>> is turbulent, can you say anything about momentum thickness Reynolds number >>>>>> Re_?? What is your wall normal spacing in plus units? (Wall resolved or >>>>>> wall modeled?) >>>>>> >>>>>> And to confirm, are you doing a nonlinearly implicit >>>>>> velocity-pressure solve? >>>>>> >>>>>> > I've found one-level DD to be ineffective when applied >>>>>> monolithically or to the momentum block of a split, as it scales with the >>>>>> mesh size. >>>>>> >>>>>> I wouldn't put too much weight on "scaling with mesh size" per se. >>>>>> You want an efficient solver for the coarsest mesh that delivers sufficient >>>>>> accuracy in your flow regime. Constants matter. >>>>>> >>>>>> Refining the mesh while holding time steps constant changes the >>>>>> advective CFL number as well as cell Peclet/cell Reynolds numbers. A >>>>>> meaningful scaling study is to increase Reynolds number (e.g., by growing >>>>>> the domain) while keeping mesh size matched in terms of plus units in the >>>>>> viscous sublayer and Kolmogorov length in the outer boundary layer. That >>>>>> turns out to not be a very automatic study to do, but it's what matters and >>>>>> you can spend a lot of time chasing ghosts with naive scaling studies. >>>>>> >>>>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jun 27 12:57:47 2023 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 27 Jun 2023 13:57:47 -0400 Subject: [petsc-users] Problem in some macro when using VS+intel cl In-Reply-To: References: Message-ID: Regarding PetscCall(). It sounds like you are working with two different versions of PETSc with different compilers? This isn't practical since things do change (improve we hope) with newer versions of PETSc. You should just built the latest version of PETSc with all the compiler suites you are interested in. Barry > On Jun 27, 2023, at 11:32 AM, ??? wrote: > > Hi, > > After failure with MS-MPI once and once again, I tried icl+oneAPI and succeeded in installing and testing PESTc in Cygwin! > > However, (always however) when I copied the example code on Getting Started page on visual studio, there are tons of error like: > <724CED62 at D9B26517.FA009B6400000000.jpg> > I just wonder where the problem locates, I've googled this error message and it seems that it's induced by the difference of compilers, c.f. https://stackoverflow.com/questions/42136395/identifier-builtin-expect-is-undefined-during-ros-on-win-tutorial-talker-ex. But Intel says that they also provide such thing on icl, and I actually use this compiler instead of visual studio cl... > > Anyway, the project could be built if I delete these error-checking macro. > > Installing feedback (or as a test result): > When configure on windows, only icl + impi works, and in this case, both --with-cc and --with-cxx options need to point out the version like: --with-cc-std-c99 and --with-cxx-std-c++'ver'. Other combinations such as cl + impi, icl + msmpi, cl + msmpi never work. My tutor told me that older version of msmpi may work but I never try this. > > FENG. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jun 27 12:59:09 2023 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 27 Jun 2023 13:59:09 -0400 Subject: [petsc-users] Problem in some macro when using VS+intel cl In-Reply-To: References: Message-ID: On Tue, Jun 27, 2023 at 11:32?AM ??? 
wrote: > Hi, > > After failure with MS-MPI once and once again, I tried icl+oneAPI and > succeeded in installing and testing PESTc in Cygwin! > > However, (always however) when I copied the example code on Getting > Started page on visual studio, there are tons of error like: > I just wonder where the problem locates, I've googled this error message > and it seems that it's induced by the difference of compilers, c.f. > https://stackoverflow.com/questions/42136395/identifier-builtin-expect-is-undefined-during-ros-on-win-tutorial-talker-ex. > But Intel says that they also provide such thing on icl, and I actually use > this compiler instead of visual studio cl... > The IDE is not showing the actual error message. Are you sure that your IDE build has the right includes and libraries? You can get these using cd $PETSC_DIR make getincludedirs make getlinklibs Thanks, Matt > Anyway, the project could be built if I delete these error-checking macro. > > Installing feedback (or as a test result): > When configure on windows, only icl + impi works, and in this case, both > --with-cc and --with-cxx options need to point out the version like: > --with-cc-std-c99 and --with-cxx-std-c++'ver'. Other combinations such as > cl + impi, icl + msmpi, cl + msmpi never work. My tutor told me that older > version of msmpi may work but I never try this. > > FENG. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 724CED62 at D9B26517.FA009B6400000000.jpg Type: image/jpeg Size: 186741 bytes Desc: not available URL: From zisheng.ye at ansys.com Tue Jun 27 13:00:08 2023 From: zisheng.ye at ansys.com (Zisheng Ye) Date: Tue, 27 Jun 2023 18:00:08 +0000 Subject: [petsc-users] GAMG and Hypre preconditioner In-Reply-To: <87cz1g3kk2.fsf@jedbrown.org> References: <87cz1g3kk2.fsf@jedbrown.org> Message-ID: Hi Jed Thanks for your reply. I have sent the log files to petsc-maint at mcs.anl.gov. Zisheng ________________________________ From: Jed Brown Sent: Tuesday, June 27, 2023 1:02 PM To: Zisheng Ye ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] GAMG and Hypre preconditioner [External Sender] Zisheng Ye via petsc-users writes: > Dear PETSc Team > > We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG and Hypre preconditioners. We have encountered several issues that we would like to ask for your suggestions. > > First, we have couple of questions when working with a single MPI rank: > > 1. We have tested two backends, CUDA and Kokkos. One commonly encountered error is related to SpGEMM in CUDA when the mat is large as listed below: > > cudaMalloc((void **)&buffer2, bufferSize2) error( cudaErrorMemoryAllocation): out of memory > > For CUDA backend, one can use "-matmatmult_backend_cpu -matptap_backend_cpu" to avoid these problems. However, there seems no equivalent options in Kokkos backend. Is there any good practice to avoid this error for both backends and if we can avoid this error in Kokkos backend? Junchao will know more about KK tuning, but the faster GPU matrix-matrix algorithms use extra memory. We should be able to make the host option available with kokkos. > 2. We have tested the combination of Hypre and Kokkos as backend. 
It looks like this combination is not compatible with each other, as we observed that KSPSolve takes a greater number of iterations to exit, and the residual norm in the post-checking is much larger than the one obtained when working with CUDA backend. This happens for matrices with block size larger than 1. Is there any explanation to the error? > > Second, we have couple more questions when working with multiple MPI ranks: > > 1. We are currently using OpenMPI as we couldnt get Intel MPI to work as a GPU-aware MPI, is this a known issue with Intel MPI? As far as I know, Intel's MPI is only for SYCL/Intel GPUs. In general, GPU-aware MPI has been incredibly flaky on all HPC systems despite being introduced ten years ago. > 2. With OpenMPI we currently see a slow down when increasing the MPI count as shown in the figure below, is this normal? Could you share -log_view output from a couple representative runs? You could send those here or to petsc-maint at mcs.anl.gov. We need to see what kind of work is not scaling to attribute what may be causing it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jun 27 13:08:06 2023 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 27 Jun 2023 14:08:06 -0400 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: On Tue, Jun 27, 2023 at 11:23?AM Vanella, Marcos (Fed) < marcos.vanella at nist.gov> wrote: > Hi Mark and Matt, I tried swapping the preconditioner to cholmod and also > the hypre Boomer AMG. They work just fine for my case. I also got my hands > on a machine with NVIDIA gpus in one of our AI clusters. I compiled PETSc > to make use of cuda and cuda-enabled openmpi (with gcc). > I'm running the previous tests and want to also check some of the cuda > enabled solvers. I was able to submit a case for the default Krylov solver > with these runtime flags: -vec_type seqcuda -mat_type seqaijcusparse > -pc_type cholesky -pc_factor_mat_solver_type cusparse. The case run to > completion. > > I guess my question now is how do I monitor (if there is a way) that the > GPU is being used in the calculation, and any other stats? > You should get that automatically with -log_view If you want finer-grained profiling of the kernels, you can use -log_view_gpu_time but it can slows things down. > Also, which other solver combination using GPU would you recommend for me > to try? Can we compile PETSc with the cuda enabled version for CHOLMOD and > HYPRE? > Hypre has GPU support but not CHOLMOD. There are no rules of thumb right now for GPUs. It depends on what card you have, what version of the driver, what version of the libraries, etc. It is very fragile. Hopefully this period ends soon, but I am not optimistic. Unless you are very confident that GPUs will help, I would not recommend spending the time. Thanks, Matt > Thank you for your help! > Marcos > > ------------------------------ > *From:* Matthew Knepley > *Sent:* Monday, June 26, 2023 12:11 PM > *To:* Vanella, Marcos (Fed) > *Cc:* Mark Adams ; petsc-users at mcs.anl.gov < > petsc-users at mcs.anl.gov> > *Subject:* Re: [petsc-users] SOLVE + PC combination for 7 point stencil > (unstructured) poisson solution > > On Mon, Jun 26, 2023 at 12:08?PM Vanella, Marcos (Fed) via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Than you Matt and Mark, I'll try your suggestions. To configure with hypre > can I just use the --download-hypre configure line? 
> > > Yes, > > Thanks, > > Matt > > > That is what I did with suitesparse, very nice. > ------------------------------ > *From:* Mark Adams > *Sent:* Monday, June 26, 2023 12:05 PM > *To:* Vanella, Marcos (Fed) > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] SOLVE + PC combination for 7 point stencil > (unstructured) poisson solution > > I'm not sure what MG is doing with an "unstructured" problem. I assume you > are not using DMDA. > -pc_type gamg should work > I would configure with hypre and try that also: -pc_type hypre > > As Matt said MG should be faster. How many iterations was it taking? > Try a 100^3 and check that the iteration count does not change much, if at > all. > > Mark > > > On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Hi, I was wondering if anyone has experience on what combinations are more > efficient to solve a Poisson problem derived from a 7 point stencil on a > single mesh (serial). > I've been doing some tests of multigrid and cholesky on a 50^3 mesh. *-pc_type > mg* takes about 75% more time than *-pc_type cholesky > -pc_factor_mat_solver_type cholmod* for the case I'm testing. > I'm new to PETSc so any suggestions are most welcome and appreciated, > Marcos > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junming.duan at epfl.ch Tue Jun 27 13:19:23 2023 From: junming.duan at epfl.ch (Duan Junming) Date: Tue, 27 Jun 2023 18:19:23 +0000 Subject: [petsc-users] How to build compatible MPI matrix for dmplex Message-ID: Dear all, I try to create a compatible sparse MPI matrix A with dmplex global vector x, so I can do matrix-vector multiplication y = A*x. I think I can first get the local and global sizes of x on comm, say n and N, also sizes of y, m, M, then create A by using MatCreate(comm, &A), set the sizes using MatSetSizes(A, m, n, M, N), set the type using MatSetType(A, MATMPIAIJ). Is this process correct? Another question is: Do the entries not filled automatically compressed out? Thanks! Junming -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jun 27 13:28:10 2023 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 27 Jun 2023 14:28:10 -0400 Subject: [petsc-users] How to build compatible MPI matrix for dmplex In-Reply-To: References: Message-ID: On Tue, Jun 27, 2023 at 2:20?PM Duan Junming via petsc-users < petsc-users at mcs.anl.gov> wrote: > Dear all, > > > I try to create a compatible sparse MPI matrix A with dmplex global vector > x, so I can do matrix-vector multiplication y = A*x. > > I think I can first get the local and global sizes of x on comm, say n and > N, also sizes of y, m, M, > > then create A by using MatCreate(comm, &A), set the sizes using > MatSetSizes(A, m, n, M, N), set the type using MatSetType(A, MATMPIAIJ). Is > this process correct? > Yes. > Another question is: Do the entries not filled automatically compressed > out? > Yes. Thanks, Matt > Thanks! 
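A sketch of the procedure confirmed above (an illustrative fragment; it assumes dm is the DMPlex, and all names are placeholders):

  Mat      A;
  Vec      x, y;
  PetscInt m, n, M, N;
  PetscCall(DMCreateGlobalVector(dm, &x));
  PetscCall(VecDuplicate(x, &y));        /* result vector; here it shares the layout of x */
  PetscCall(VecGetLocalSize(x, &n));
  PetscCall(VecGetSize(x, &N));
  PetscCall(VecGetLocalSize(y, &m));
  PetscCall(VecGetSize(y, &M));
  PetscCall(MatCreate(PetscObjectComm((PetscObject)dm), &A));
  PetscCall(MatSetSizes(A, m, n, M, N)); /* rows follow y, columns follow x */
  PetscCall(MatSetType(A, MATMPIAIJ));
  PetscCall(MatSetUp(A));
  /* fill with MatSetValues(); entries never set are squeezed out during assembly */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatMult(A, x, y));           /* y = A*x */

When the sparsity pattern should simply come from the DM, DMCreateMatrix(dm, &A) returns a compatible, preallocated matrix in one call.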
> > Junming > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcos.vanella at nist.gov Tue Jun 27 13:56:47 2023 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Tue, 27 Jun 2023 18:56:47 +0000 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: Thank you Matt. I'll try the flags you recommend for monitoring. Correct, I'm trying to see if GPU would provide an advantage for this particular Poisson solution we do in our code. Our grids are staggered with the Poisson unknown in cell centers. All my tests for single mesh runs with 100K to 200K meshes show MKL PARDISO as the faster option for these meshes considering the mesh as unstructured (an implementation separate from the PETSc option). We have the option of Fishpack (fast trigonometric solvers), but that is not as general (requires solution on the whole mesh + a special treatment of immersed geometry). The single mesh solver is used as a black box within a fixed point domain decomposition iteration in multi-mesh cases. The approximation error in this method is confined to the mesh boundaries. The other option I have tried with MKL is to build the global matrix across all meshes and use the MKL cluster sparse solver. The problem becomes a memory one for meshes that go over a couple million unknowns due to the exact Cholesky factorization matrix storage. I'm thinking the other possibility using PETSc is to build in parallel the global matrix (as done for the MKL global solver) and try the GPU accelerated Krylov + multigrid preconditioner. If this can bring down the time to solution to what we get for the previous scheme and keep memory use undrr control it would be a good option for CPU+GPU systems. Thing is we need to bring the residual of the equation to ~10^-10 or less to avoid instability so it might still be costly. I'll keep you updated. Thanks, Marcos ________________________________ From: Matthew Knepley Sent: Tuesday, June 27, 2023 2:08 PM To: Vanella, Marcos (Fed) Cc: Mark Adams ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution On Tue, Jun 27, 2023 at 11:23?AM Vanella, Marcos (Fed) > wrote: Hi Mark and Matt, I tried swapping the preconditioner to cholmod and also the hypre Boomer AMG. They work just fine for my case. I also got my hands on a machine with NVIDIA gpus in one of our AI clusters. I compiled PETSc to make use of cuda and cuda-enabled openmpi (with gcc). I'm running the previous tests and want to also check some of the cuda enabled solvers. I was able to submit a case for the default Krylov solver with these runtime flags: -vec_type seqcuda -mat_type seqaijcusparse -pc_type cholesky -pc_factor_mat_solver_type cusparse. The case run to completion. I guess my question now is how do I monitor (if there is a way) that the GPU is being used in the calculation, and any other stats? You should get that automatically with -log_view If you want finer-grained profiling of the kernels, you can use -log_view_gpu_time but it can slows things down. Also, which other solver combination using GPU would you recommend for me to try? Can we compile PETSc with the cuda enabled version for CHOLMOD and HYPRE? 
Hypre has GPU support but not CHOLMOD. There are no rules of thumb right now for GPUs. It depends on what card you have, what version of the driver, what version of the libraries, etc. It is very fragile. Hopefully this period ends soon, but I am not optimistic. Unless you are very confident that GPUs will help, I would not recommend spending the time. Thanks, Matt Thank you for your help! Marcos ________________________________ From: Matthew Knepley > Sent: Monday, June 26, 2023 12:11 PM To: Vanella, Marcos (Fed) > Cc: Mark Adams >; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution On Mon, Jun 26, 2023 at 12:08?PM Vanella, Marcos (Fed) via petsc-users > wrote: Than you Matt and Mark, I'll try your suggestions. To configure with hypre can I just use the --download-hypre configure line? Yes, Thanks, Matt That is what I did with suitesparse, very nice. ________________________________ From: Mark Adams > Sent: Monday, June 26, 2023 12:05 PM To: Vanella, Marcos (Fed) > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution I'm not sure what MG is doing with an "unstructured" problem. I assume you are not using DMDA. -pc_type gamg should work I would configure with hypre and try that also: -pc_type hypre As Matt said MG should be faster. How many iterations was it taking? Try a 100^3 and check that the iteration count does not change much, if at all. Mark On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users > wrote: Hi, I was wondering if anyone has experience on what combinations are more efficient to solve a Poisson problem derived from a 7 point stencil on a single mesh (serial). I've been doing some tests of multigrid and cholesky on a 50^3 mesh. -pc_type mg takes about 75% more time than -pc_type cholesky -pc_factor_mat_solver_type cholmod for the case I'm testing. I'm new to PETSc so any suggestions are most welcome and appreciated, Marcos -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcos.vanella at nist.gov Tue Jun 27 14:03:58 2023 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Tue, 27 Jun 2023 19:03:58 +0000 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: Sorry, meant 100K to 200K cells. Also, check the release page of suitesparse. The mutli-GPU version of cholmod might be coming soon: https://people.engr.tamu.edu/davis/SuiteSparse/index.html ________________________________ From: Vanella, Marcos (Fed) Sent: Tuesday, June 27, 2023 2:56 PM To: Matthew Knepley Cc: Mark Adams ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution Thank you Matt. I'll try the flags you recommend for monitoring. Correct, I'm trying to see if GPU would provide an advantage for this particular Poisson solution we do in our code. 
Our grids are staggered with the Poisson unknown in cell centers. All my tests for single mesh runs with 100K to 200K meshes show MKL PARDISO as the faster option for these meshes considering the mesh as unstructured (an implementation separate from the PETSc option). We have the option of Fishpack (fast trigonometric solvers), but that is not as general (requires solution on the whole mesh + a special treatment of immersed geometry). The single mesh solver is used as a black box within a fixed point domain decomposition iteration in multi-mesh cases. The approximation error in this method is confined to the mesh boundaries. The other option I have tried with MKL is to build the global matrix across all meshes and use the MKL cluster sparse solver. The problem becomes a memory one for meshes that go over a couple million unknowns due to the exact Cholesky factorization matrix storage. I'm thinking the other possibility using PETSc is to build in parallel the global matrix (as done for the MKL global solver) and try the GPU accelerated Krylov + multigrid preconditioner. If this can bring down the time to solution to what we get for the previous scheme and keep memory use undrr control it would be a good option for CPU+GPU systems. Thing is we need to bring the residual of the equation to ~10^-10 or less to avoid instability so it might still be costly. I'll keep you updated. Thanks, Marcos ________________________________ From: Matthew Knepley Sent: Tuesday, June 27, 2023 2:08 PM To: Vanella, Marcos (Fed) Cc: Mark Adams ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution On Tue, Jun 27, 2023 at 11:23?AM Vanella, Marcos (Fed) > wrote: Hi Mark and Matt, I tried swapping the preconditioner to cholmod and also the hypre Boomer AMG. They work just fine for my case. I also got my hands on a machine with NVIDIA gpus in one of our AI clusters. I compiled PETSc to make use of cuda and cuda-enabled openmpi (with gcc). I'm running the previous tests and want to also check some of the cuda enabled solvers. I was able to submit a case for the default Krylov solver with these runtime flags: -vec_type seqcuda -mat_type seqaijcusparse -pc_type cholesky -pc_factor_mat_solver_type cusparse. The case run to completion. I guess my question now is how do I monitor (if there is a way) that the GPU is being used in the calculation, and any other stats? You should get that automatically with -log_view If you want finer-grained profiling of the kernels, you can use -log_view_gpu_time but it can slows things down. Also, which other solver combination using GPU would you recommend for me to try? Can we compile PETSc with the cuda enabled version for CHOLMOD and HYPRE? Hypre has GPU support but not CHOLMOD. There are no rules of thumb right now for GPUs. It depends on what card you have, what version of the driver, what version of the libraries, etc. It is very fragile. Hopefully this period ends soon, but I am not optimistic. Unless you are very confident that GPUs will help, I would not recommend spending the time. Thanks, Matt Thank you for your help! 
Marcos ________________________________ From: Matthew Knepley > Sent: Monday, June 26, 2023 12:11 PM To: Vanella, Marcos (Fed) > Cc: Mark Adams >; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution On Mon, Jun 26, 2023 at 12:08?PM Vanella, Marcos (Fed) via petsc-users > wrote: Than you Matt and Mark, I'll try your suggestions. To configure with hypre can I just use the --download-hypre configure line? Yes, Thanks, Matt That is what I did with suitesparse, very nice. ________________________________ From: Mark Adams > Sent: Monday, June 26, 2023 12:05 PM To: Vanella, Marcos (Fed) > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution I'm not sure what MG is doing with an "unstructured" problem. I assume you are not using DMDA. -pc_type gamg should work I would configure with hypre and try that also: -pc_type hypre As Matt said MG should be faster. How many iterations was it taking? Try a 100^3 and check that the iteration count does not change much, if at all. Mark On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users > wrote: Hi, I was wondering if anyone has experience on what combinations are more efficient to solve a Poisson problem derived from a 7 point stencil on a single mesh (serial). I've been doing some tests of multigrid and cholesky on a 50^3 mesh. -pc_type mg takes about 75% more time than -pc_type cholesky -pc_factor_mat_solver_type cholmod for the case I'm testing. I'm new to PETSc so any suggestions are most welcome and appreciated, Marcos -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jun 27 14:11:32 2023 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 27 Jun 2023 15:11:32 -0400 Subject: [petsc-users] SOLVE + PC combination for 7 point stencil (unstructured) poisson solution In-Reply-To: References: Message-ID: On Tue, Jun 27, 2023 at 2:56?PM Vanella, Marcos (Fed) < marcos.vanella at nist.gov> wrote: > Thank you Matt. I'll try the flags you recommend for monitoring. Correct, > I'm trying to see if GPU would provide an advantage for this particular > Poisson solution we do in our code. > > Our grids are staggered with the Poisson unknown in cell centers. All my > tests for single mesh runs with 100K to 200K meshes show MKL PARDISO as the > faster option for these meshes considering the mesh as unstructured (an > implementation separate from the PETSc option). We have the option of > Fishpack (fast trigonometric solvers), but that is not as general (requires > solution on the whole mesh + a special treatment of immersed geometry). The > single mesh solver is used as a black box within a fixed point domain > decomposition iteration in multi-mesh cases. The approximation error in > this method is confined to the mesh boundaries. > > The other option I have tried with MKL is to build the global matrix > across all meshes and use the MKL cluster sparse solver. 
The problem > becomes a memory one for meshes that go over a couple million unknowns due > to the exact Cholesky factorization matrix storage. I'm thinking the other > possibility using PETSc is to build in parallel the global matrix (as done > for the MKL global solver) and try the GPU accelerated Krylov + multigrid > preconditioner. If this can bring down the time to solution to what we get > for the previous scheme and keep memory use undrr control it would be a > good option for CPU+GPU systems. Thing is we need to bring the residual of > the equation to ~10^-10 or less to avoid instability so it might still be > costly. > Yes, this is definitely the option I would try. First, I would just use AMG (GAMG, Hypre, ML). If those work, you can speed up the setup time and bring down memory somewhat with GMG. Since your grid is Cartesian, you could use DMDA to do this easily. Thanks, Matt > I'll keep you updated. Thanks, > Marcos > ------------------------------ > *From:* Matthew Knepley > *Sent:* Tuesday, June 27, 2023 2:08 PM > *To:* Vanella, Marcos (Fed) > *Cc:* Mark Adams ; petsc-users at mcs.anl.gov < > petsc-users at mcs.anl.gov> > *Subject:* Re: [petsc-users] SOLVE + PC combination for 7 point stencil > (unstructured) poisson solution > > On Tue, Jun 27, 2023 at 11:23?AM Vanella, Marcos (Fed) < > marcos.vanella at nist.gov> wrote: > > Hi Mark and Matt, I tried swapping the preconditioner to cholmod and also > the hypre Boomer AMG. They work just fine for my case. I also got my hands > on a machine with NVIDIA gpus in one of our AI clusters. I compiled PETSc > to make use of cuda and cuda-enabled openmpi (with gcc). > I'm running the previous tests and want to also check some of the cuda > enabled solvers. I was able to submit a case for the default Krylov solver > with these runtime flags: -vec_type seqcuda -mat_type seqaijcusparse > -pc_type cholesky -pc_factor_mat_solver_type cusparse. The case run to > completion. > > I guess my question now is how do I monitor (if there is a way) that the > GPU is being used in the calculation, and any other stats? > > > You should get that automatically with > > -log_view > > If you want finer-grained profiling of the kernels, you can use > > -log_view_gpu_time > > but it can slows things down. > > > Also, which other solver combination using GPU would you recommend for me > to try? Can we compile PETSc with the cuda enabled version for CHOLMOD and > HYPRE? > > > Hypre has GPU support but not CHOLMOD. There are no rules of thumb right > now for GPUs. It depends on what card you have, what version of the driver, > what version of the libraries, etc. It is very fragile. Hopefully this > period ends soon, but I am not optimistic. Unless you are very confident > that GPUs will help, > I would not recommend spending the time. > > Thanks, > > Matt > > > Thank you for your help! > Marcos > > ------------------------------ > *From:* Matthew Knepley > *Sent:* Monday, June 26, 2023 12:11 PM > *To:* Vanella, Marcos (Fed) > *Cc:* Mark Adams ; petsc-users at mcs.anl.gov < > petsc-users at mcs.anl.gov> > *Subject:* Re: [petsc-users] SOLVE + PC combination for 7 point stencil > (unstructured) poisson solution > > On Mon, Jun 26, 2023 at 12:08?PM Vanella, Marcos (Fed) via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Than you Matt and Mark, I'll try your suggestions. To configure with hypre > can I just use the --download-hypre configure line? > > > Yes, > > Thanks, > > Matt > > > That is what I did with suitesparse, very nice. 
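As an illustration of the DMDA route suggested above, a minimal sketch in the style of the PETSc KSP tutorial ex45 (ComputeMatrix and ComputeRHS are placeholders for the application's own 7-point assembly routines, and the grid sizes are arbitrary):

#include <petscdmda.h>
#include <petscksp.h>

extern PetscErrorCode ComputeMatrix(KSP, Mat, Mat, void *);
extern PetscErrorCode ComputeRHS(KSP, Vec, void *);

int main(int argc, char **argv)
{
  DM  da;
  KSP ksp;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 17, 17, 17, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, NULL, &da));
  PetscCall(DMSetFromOptions(da));
  PetscCall(DMSetUp(da));
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetDM(ksp, da));               /* the KSP builds its Mat/Vec from the DMDA */
  PetscCall(KSPSetComputeOperators(ksp, ComputeMatrix, NULL));
  PetscCall(KSPSetComputeRHS(ksp, ComputeRHS, NULL));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, NULL, NULL));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}

Geometric multigrid then needs no extra code: -da_refine 4 -pc_type mg builds the hierarchy from the DMDA, and the same executable can still be run with -pc_type gamg or -pc_type hypre for comparison.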
> ------------------------------ > *From:* Mark Adams > *Sent:* Monday, June 26, 2023 12:05 PM > *To:* Vanella, Marcos (Fed) > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] SOLVE + PC combination for 7 point stencil > (unstructured) poisson solution > > I'm not sure what MG is doing with an "unstructured" problem. I assume you > are not using DMDA. > -pc_type gamg should work > I would configure with hypre and try that also: -pc_type hypre > > As Matt said MG should be faster. How many iterations was it taking? > Try a 100^3 and check that the iteration count does not change much, if at > all. > > Mark > > > On Mon, Jun 26, 2023 at 11:35?AM Vanella, Marcos (Fed) via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Hi, I was wondering if anyone has experience on what combinations are more > efficient to solve a Poisson problem derived from a 7 point stencil on a > single mesh (serial). > I've been doing some tests of multigrid and cholesky on a 50^3 mesh. *-pc_type > mg* takes about 75% more time than *-pc_type cholesky > -pc_factor_mat_solver_type cholmod* for the case I'm testing. > I'm new to PETSc so any suggestions are most welcome and appreciated, > Marcos > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fengshw3 at mail2.sysu.edu.cn Tue Jun 27 21:09:48 2023 From: fengshw3 at mail2.sysu.edu.cn (=?utf-8?B?5Yav5LiK546u?=) Date: Wed, 28 Jun 2023 10:09:48 +0800 Subject: [petsc-users] Problem in some macro when using VS+intel cl In-Reply-To: References: Message-ID: I've followed your advice and include the header's file and libraries in Visual Studio. Such "error" still shows but I can build the project! It's strange! I expand the CHKERRQ macro and find the error actually locates at   What I know from google is that the "__builtin_expect__" is defined in GCC, so is it unsolvable in Windows with visual studio C compiler or Inter C compiler? ------------------ Original ------------------ From:  "Matthew Knepley" -------------- next part -------------- A non-text attachment was scrubbed... Name: CCB1C477 at 3FC4424F.6C969B6400000000.jpg Type: image/jpeg Size: 186741 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2F07716E at 673DAE5F.6C969B6400000000.bmp Type: application/octet-stream Size: 2280918 bytes Desc: not available URL: From bsmith at petsc.dev Tue Jun 27 21:24:38 2023 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 27 Jun 2023 22:24:38 -0400 Subject: [petsc-users] Problem in some macro when using VS+intel cl In-Reply-To: References: Message-ID: <02C9D5D1-EAC5-423E-8CE6-35C39EBF11BE@petsc.dev> The macros expand differently depending on the compiler being used. 
In this case #if defined(PETSC_HAVE_BUILTIN_EXPECT) #define PetscUnlikely(cond) __builtin_expect(!!(cond), 0) #define PetscLikely(cond) __builtin_expect(!!(cond), 1) #else #define PetscUnlikely(cond) (cond) #define PetscLikely(cond) (cond) #endif So with Microsoft Windows compilers, if they do not support built_inexpect the compiler will only see the #else for the macro thus the compiler would never see the __builtin_expect You can check in $PETSC_DIR/$PETSC_ARCH/include/petscconf.h and see if PETSC_HAVE_BUILTIN_EXPECT is defined. ./configure determines if this (and many other) features are supported by the compiler. It is conceivable that somehow configure determined incorrectly that this is supported. > On Jun 27, 2023, at 10:09 PM, ??? wrote: > > I've followed your advice and include the header's file and libraries in Visual Studio. Such "error" still shows but I can build the project! It's strange! > I expand the CHKERRQ macro and find the error actually locates at > > <2F07716E at 673DAE5F.6C969B6400000000.bmp> > > What I know from google is that the "__builtin_expect__" is defined in GCC, so is it unsolvable in Windows with visual studio C compiler or Inter C compiler? > ------------------ Original ------------------ > From: "Matthew Knepley"; > Date: Wed, Jun 28, 2023 01:59 AM > To: "???"; > Cc: "petsc-users"; > Subject: Re: [petsc-users] Problem in some macro when using VS+intel cl > > On Tue, Jun 27, 2023 at 11:32?AM ??? > wrote: >> Hi, >> >> After failure with MS-MPI once and once again, I tried icl+oneAPI and succeeded in installing and testing PESTc in Cygwin! >> >> However, (always however) when I copied the example code on Getting Started page on visual studio, there are tons of error like: >> >> I just wonder where the problem locates, I've googled this error message and it seems that it's induced by the difference of compilers, c.f. https://stackoverflow.com/questions/42136395/identifier-builtin-expect-is-undefined-during-ros-on-win-tutorial-talker-ex. But Intel says that they also provide such thing on icl, and I actually use this compiler instead of visual studio cl... > > The IDE is not showing the actual error message. Are you sure that your IDE build has the right includes and libraries? You can > get these using > > cd $PETSC_DIR > make getincludedirs > make getlinklibs > > Thanks, > > Matt > >> Anyway, the project could be built if I delete these error-checking macro. >> >> Installing feedback (or as a test result): >> When configure on windows, only icl + impi works, and in this case, both --with-cc and --with-cxx options need to point out the version like: --with-cc-std-c99 and --with-cxx-std-c++'ver'. Other combinations such as cl + impi, icl + msmpi, cl + msmpi never work. My tutor told me that older version of msmpi may work but I never try this. >> >> FENG. > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fengshw3 at mail2.sysu.edu.cn Tue Jun 27 22:30:07 2023 From: fengshw3 at mail2.sysu.edu.cn (=?utf-8?B?5Yav5LiK546u?=) Date: Wed, 28 Jun 2023 11:30:07 +0800 Subject: [petsc-users] Problem in some macro when using VS+intel cl In-Reply-To: <02C9D5D1-EAC5-423E-8CE6-35C39EBF11BE@petsc.dev> References: <02C9D5D1-EAC5-423E-8CE6-35C39EBF11BE@petsc.dev> Message-ID: This is EXACTLY the CRUX of the matter, with this precompile command, there is no more error! Thanks for your patience with my numerous and continuous questions. Je vous remercie !!      ------------------ Original ------------------ From:  "Barry Smith" From alexlindsay239 at gmail.com Wed Jun 28 13:37:48 2023 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Wed, 28 Jun 2023 11:37:48 -0700 Subject: [petsc-users] Scalable Solver for Incompressible Flow In-Reply-To: References: <87cz3i7fj1.fsf@jedbrown.org> <3287ff5f-5ac1-fdff-52d1-97888568c098@itwm.fraunhofer.de> <8735479bsg.fsf@jedbrown.org> <875y7ymzc2.fsf@jedbrown.org> <15FFDCF6-48C9-4331-A9FE-932BBDD418D1@lip6.fr> Message-ID: I do believe that based off the results in https://doi.org/10.1137/040608817 we should be able to make LSC, with proper scaling, compare very favorably with PCD On Tue, Jun 27, 2023 at 10:41?AM Alexander Lindsay wrote: > I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which > adds a couple more scaling applications of the inverse of the diagonal of A > > On Mon, Jun 26, 2023 at 6:06?PM Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> I guess that similar to the discussions about selfp, the approximation of >> the velocity mass matrix by the diagonal of the velocity sub-matrix will >> improve when running a transient as opposed to a steady calculation, >> especially if the time derivative is lumped.... Just thinking while typing >> >> On Mon, Jun 26, 2023 at 6:03?PM Alexander Lindsay < >> alexlindsay239 at gmail.com> wrote: >> >>> Returning to Sebastian's question about the correctness of the current >>> LSC implementation: in the taxonomy paper that Jed linked to (which talks >>> about SIMPLE, PCD, and LSC), equation 21 shows four applications of the >>> inverse of the velocity mass matrix. In the PETSc implementation there are >>> at most two applications of the reciprocal of the diagonal of A (an >>> approximation to the velocity mass matrix without more plumbing, as already >>> pointed out). It seems like for code implementations in which there are >>> possible scaling differences between the velocity and pressure equations, >>> that this difference in the number of inverse applications could be >>> significant? I know Jed said that these scalings wouldn't really matter if >>> you have a uniform grid, but I'm not 100% convinced yet. >>> >>> I might try fiddling around with adding two more reciprocal applications. >>> >>> On Fri, Jun 23, 2023 at 1:09?PM Pierre Jolivet >>> wrote: >>> >>>> >>>> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet >>>> wrote: >>>> >>>> >>>> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay >>>> wrote: >>>> >>>> Ah, I see that if I use Pierre's new 'full' option for >>>> -mat_schur_complement_ainv_type >>>> >>>> >>>> That was not initially done by me >>>> >>>> >>>> Oops, sorry for the noise, looks like it was done by me indeed >>>> in 9399e4fd88c6621aad8fe9558ce84df37bd6fada? >>>> >>>> Thanks, >>>> Pierre >>>> >>>> (though I recently tweaked MatSchurComplementComputeExplicitOperator() >>>> a bit to use KSPMatSolve(), so that if you have a small Schur complement ? 
>>>> which is not really the case for NS ? this could be a viable option, it was >>>> previously painfully slow). >>>> >>>> Thanks, >>>> Pierre >>>> >>>> that I get a single iteration for the Schur complement solve with LU. >>>> That's a nice testing option >>>> >>>> On Fri, Jun 23, 2023 at 12:02?PM Alexander Lindsay < >>>> alexlindsay239 at gmail.com> wrote: >>>> >>>>> I guess it is because the inverse of the diagonal form of A00 becomes >>>>> a poor representation of the inverse of A00? I guess naively I would have >>>>> thought that the blockdiag form of A00 is A00 >>>>> >>>>> On Fri, Jun 23, 2023 at 10:18?AM Alexander Lindsay < >>>>> alexlindsay239 at gmail.com> wrote: >>>>> >>>>>> Hi Jed, I will come back with answers to all of your questions at >>>>>> some point. I mostly just deal with MOOSE users who come to me and tell me >>>>>> their solve is converging slowly, asking me how to fix it. So I generally >>>>>> assume they have built an appropriate mesh and problem size for the problem >>>>>> they want to solve and added appropriate turbulence modeling (although my >>>>>> general assumption is often violated). >>>>>> >>>>>> > And to confirm, are you doing a nonlinearly implicit >>>>>> velocity-pressure solve? >>>>>> >>>>>> Yes, this is our default. >>>>>> >>>>>> A general question: it seems that it is well known that the quality >>>>>> of selfp degrades with increasing advection. Why is that? >>>>>> >>>>>> On Wed, Jun 7, 2023 at 8:01?PM Jed Brown wrote: >>>>>> >>>>>>> Alexander Lindsay writes: >>>>>>> >>>>>>> > This has been a great discussion to follow. Regarding >>>>>>> > >>>>>>> >> when time stepping, you have enough mass matrix that cheaper >>>>>>> preconditioners are good enough >>>>>>> > >>>>>>> > I'm curious what some algebraic recommendations might be for high >>>>>>> Re in >>>>>>> > transients. >>>>>>> >>>>>>> What mesh aspect ratio and streamline CFL number? Assuming your >>>>>>> model is turbulent, can you say anything about momentum thickness Reynolds >>>>>>> number Re_?? What is your wall normal spacing in plus units? (Wall resolved >>>>>>> or wall modeled?) >>>>>>> >>>>>>> And to confirm, are you doing a nonlinearly implicit >>>>>>> velocity-pressure solve? >>>>>>> >>>>>>> > I've found one-level DD to be ineffective when applied >>>>>>> monolithically or to the momentum block of a split, as it scales with the >>>>>>> mesh size. >>>>>>> >>>>>>> I wouldn't put too much weight on "scaling with mesh size" per se. >>>>>>> You want an efficient solver for the coarsest mesh that delivers sufficient >>>>>>> accuracy in your flow regime. Constants matter. >>>>>>> >>>>>>> Refining the mesh while holding time steps constant changes the >>>>>>> advective CFL number as well as cell Peclet/cell Reynolds numbers. A >>>>>>> meaningful scaling study is to increase Reynolds number (e.g., by growing >>>>>>> the domain) while keeping mesh size matched in terms of plus units in the >>>>>>> viscous sublayer and Kolmogorov length in the outer boundary layer. That >>>>>>> turns out to not be a very automatic study to do, but it's what matters and >>>>>>> you can spend a lot of time chasing ghosts with naive scaling studies. >>>>>>> >>>>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngocmaimonica.huynh at unipv.it Thu Jun 29 09:48:36 2023 From: ngocmaimonica.huynh at unipv.it (Ngoc Mai Monica Huynh) Date: Thu, 29 Jun 2023 16:48:36 +0200 Subject: [petsc-users] Fortran alternative for DMDAGetElements? 
Message-ID: <36F62697-071E-4F70-B19E-F37244F40B48@unipv.it> Hi everyone, I would need to use the routine DMDAGetElements() in our Fortran code. However, as I read from the manual, there is no Fortran support for this routine. Is there any similar alternative there? Many thanks! Best regards, Monica Huynh From bsmith at petsc.dev Thu Jun 29 11:38:25 2023 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 29 Jun 2023 12:38:25 -0400 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: <36F62697-071E-4F70-B19E-F37244F40B48@unipv.it> References: <36F62697-071E-4F70-B19E-F37244F40B48@unipv.it> Message-ID: I can provide the Fortran interface this afternoon. Barry > On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh wrote: > > Hi everyone, > > I would need to use the routine DMDAGetElements() in our Fortran code. > However, as I read from the manual, there is no Fortran support for this routine. > Is there any similar alternative there? > > Many thanks! > Best regards, > Monica Huynh From ngocmaimonica.huynh at unipv.it Thu Jun 29 11:41:08 2023 From: ngocmaimonica.huynh at unipv.it (Ngoc Mai Monica Huynh) Date: Thu, 29 Jun 2023 18:41:08 +0200 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: References: <36F62697-071E-4F70-B19E-F37244F40B48@unipv.it> Message-ID: <4CE1CFBE-E4BC-4E3A-B695-50813AD2BDB1@unipv.it> That would be amazing, thank you very much! Monica > On 29 Jun 2023, at 18:38, Barry Smith wrote: > > > I can provide the Fortran interface this afternoon. > > Barry > > >> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh wrote: >> >> Hi everyone, >> >> I would need to use the routine DMDAGetElements() in our Fortran code. >> However, as I read from the manual, there is no Fortran support for this routine. >> Is there any similar alternative there? >> >> Many thanks! >> Best regards, >> Monica Huynh > From bsmith at petsc.dev Thu Jun 29 13:17:30 2023 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 29 Jun 2023 14:17:30 -0400 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: <4CE1CFBE-E4BC-4E3A-B695-50813AD2BDB1@unipv.it> References: <36F62697-071E-4F70-B19E-F37244F40B48@unipv.it> <4CE1CFBE-E4BC-4E3A-B695-50813AD2BDB1@unipv.it> Message-ID: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> The code is ready in the branch barry/2023-06-29/add-dmdagetelements-fortran https://gitlab.com/petsc/petsc/-/merge_requests/6647 Barry > On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh wrote: > > That would be amazing, thank you very much! > Monica > >> On 29 Jun 2023, at 18:38, Barry Smith wrote: >> >> >> I can provide the Fortran interface this afternoon. >> >> Barry >> >> >>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh wrote: >>> >>> Hi everyone, >>> >>> I would need to use the routine DMDAGetElements() in our Fortran code. >>> However, as I read from the manual, there is no Fortran support for this routine. >>> Is there any similar alternative there? >>> >>> Many thanks! >>> Best regards, >>> Monica Huynh >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngocmaimonica.huynh at unipv.it Thu Jun 29 13:59:21 2023 From: ngocmaimonica.huynh at unipv.it (Ngoc Mai Monica Huynh) Date: Thu, 29 Jun 2023 20:59:21 +0200 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> Message-ID: <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> Thank you. 
Does this mean that DMDARestoreElements() is supported as well now? Monica > Il giorno 29 giu 2023, alle ore 20:17, Barry Smith ha scritto: > > ? > > The code is ready in the branch barry/2023-06-29/add-dmdagetelements-fortran https://gitlab.com/petsc/petsc/-/merge_requests/6647 > > Barry > > >> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh wrote: >> >> That would be amazing, thank you very much! >> Monica >> >>> On 29 Jun 2023, at 18:38, Barry Smith wrote: >>> >>> >>> I can provide the Fortran interface this afternoon. >>> >>> Barry >>> >>> >>>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh wrote: >>>> >>>> Hi everyone, >>>> >>>> I would need to use the routine DMDAGetElements() in our Fortran code. >>>> However, as I read from the manual, there is no Fortran support for this routine. >>>> Is there any similar alternative there? >>>> >>>> Many thanks! >>>> Best regards, >>>> Monica Huynh >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jun 29 14:09:58 2023 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 29 Jun 2023 15:09:58 -0400 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> Message-ID: On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh < ngocmaimonica.huynh at unipv.it> wrote: > Thank you. > Does this mean that DMDARestoreElements() is supported as well now? > Yes. Thanks, Matt > Monica > > > Il giorno 29 giu 2023, alle ore 20:17, Barry Smith ha > scritto: > > ? > > The code is ready in the branch > *barry/2023-06-29/add-dmdagetelements-fortran * > https://gitlab.com/petsc/petsc/-/merge_requests/6647 > > Barry > > > On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh < > ngocmaimonica.huynh at unipv.it> wrote: > > That would be amazing, thank you very much! > Monica > > On 29 Jun 2023, at 18:38, Barry Smith wrote: > > > I can provide the Fortran interface this afternoon. > > Barry > > > On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh < > ngocmaimonica.huynh at unipv.it> wrote: > > Hi everyone, > > I would need to use the routine DMDAGetElements() in our Fortran code. > However, as I read from the manual, there is no Fortran support for this > routine. > Is there any similar alternative there? > > Many thanks! > Best regards, > Monica Huynh > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cho at slac.stanford.edu Thu Jun 29 18:50:10 2023 From: cho at slac.stanford.edu (Ng, Cho-Kuen) Date: Thu, 29 Jun 2023 23:50:10 +0000 Subject: [petsc-users] Using PETSc GPU backend Message-ID: I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend? Thanks, Cho -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Thu Jun 29 19:55:08 2023 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 29 Jun 2023 20:55:08 -0400 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: References: Message-ID: Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left The last column of the performance data (from -log_view) will be the percent flops on the GPU. Check that that is > 0. The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left. Mark On Thu, Jun 29, 2023 at 7:50?PM Ng, Cho-Kuen via petsc-users < petsc-users at mcs.anl.gov> wrote: > I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and > used it by "spack load petsc/fwge6pf". Then I compiled the application > code (purely CPU code) linking to the petsc package, hoping that I can get > performance improvement using the petsc GPU backend. However, the timing > was the same using the same number of MPI tasks with and without GPU > accelerators. Have I missed something in the process, for example, setting > up PETSc options at runtime to use the GPU backend? > > Thanks, > Cho > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cho at slac.stanford.edu Thu Jun 29 22:32:14 2023 From: cho at slac.stanford.edu (Ng, Cho-Kuen) Date: Fri, 30 Jun 2023 03:32:14 +0000 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: References: Message-ID: Mark, Thanks for the information. How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments? Do I need to change the C++ main to read in the options? Cho ________________________________ From: Mark Adams Sent: Thursday, June 29, 2023 5:55 PM To: Ng, Cho-Kuen Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Using PETSc GPU backend Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left The last column of the performance data (from -log_view) will be the percent flops on the GPU. Check that that is > 0. The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left. Mark On Thu, Jun 29, 2023 at 7:50?PM Ng, Cho-Kuen via petsc-users > wrote: I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend? Thanks, Cho -------------- next part -------------- An HTML attachment was scrubbed... URL: From cho at slac.stanford.edu Fri Jun 30 00:12:21 2023 From: cho at slac.stanford.edu (Ng, Cho-Kuen) Date: Fri, 30 Jun 2023 05:12:21 +0000 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: References: Message-ID: Mark, The application code reads in parameters from an input file, where we can put the PETSc runtime options. Then we pass the options to PetscInitialize(...). Does that sounds right? Cho ________________________________ From: Ng, Cho-Kuen Sent: Thursday, June 29, 2023 8:32 PM To: Mark Adams Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Using PETSc GPU backend Mark, Thanks for the information. 
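A minimal sketch of what makes those runtime options take effect (not code from this thread; the global size n and the surrounding program are assumed): -mat_type and -vec_type are only honored by objects created with the generic constructors followed by *SetFromOptions().

  Mat A;
  Vec x;
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));  /* picks up -mat_type aijcusparse at runtime */
  PetscCall(MatSetUp(A));
  PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
  PetscCall(VecSetSizes(x, PETSC_DECIDE, n));
  PetscCall(VecSetFromOptions(x));  /* picks up -vec_type cuda at runtime */

A code path that hard-wires a specific type (for example a direct MatCreateMPIAIJ() call) never queries -mat_type, and -options_left will then report the option as unused, which is exactly the check suggested above.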
How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments? Do I need to change the C++ main to read in the options? Cho ________________________________ From: Mark Adams Sent: Thursday, June 29, 2023 5:55 PM To: Ng, Cho-Kuen Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Using PETSc GPU backend Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left The last column of the performance data (from -log_view) will be the percent flops on the GPU. Check that that is > 0. The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left. Mark On Thu, Jun 29, 2023 at 7:50?PM Ng, Cho-Kuen via petsc-users > wrote: I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend? Thanks, Cho -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jun 30 05:16:32 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 30 Jun 2023 06:16:32 -0400 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: References: Message-ID: On Fri, Jun 30, 2023 at 1:13?AM Ng, Cho-Kuen via petsc-users < petsc-users at mcs.anl.gov> wrote: > Mark, > > The application code reads in parameters from an input file, where we can > put the PETSc runtime options. Then we pass the options to > PetscInitialize(...). Does that sounds right? > PETSc will read command line argument automatically in PetscInitialize() unless you shut it off. Thanks, Matt > Cho > ------------------------------ > *From:* Ng, Cho-Kuen > *Sent:* Thursday, June 29, 2023 8:32 PM > *To:* Mark Adams > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Using PETSc GPU backend > > Mark, > > Thanks for the information. How do I put the runtime options for the > executable, say, a.out, which does not have the provision to append > arguments? Do I need to change the C++ main to read in the options? > > Cho > ------------------------------ > *From:* Mark Adams > *Sent:* Thursday, June 29, 2023 5:55 PM > *To:* Ng, Cho-Kuen > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Using PETSc GPU backend > > Run with options: -mat_type aijcusparse -vec_type cuda -log_view > -options_left > > The last column of the performance data (from -log_view) will be the > percent flops on the GPU. Check that that is > 0. > > The end of the output will list the options that were used and options > that were _not_ used (if any). Check that there are no options left. > > Mark > > On Thu, Jun 29, 2023 at 7:50?PM Ng, Cho-Kuen via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and > used it by "spack load petsc/fwge6pf". Then I compiled the application > code (purely CPU code) linking to the petsc package, hoping that I can get > performance improvement using the petsc GPU backend. However, the timing > was the same using the same number of MPI tasks with and without GPU > accelerators. 
Have I missed something in the process, for example, setting > up PETSc options at runtime to use the GPU backend? > > Thanks, > Cho > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngocmaimonica.huynh at unipv.it Fri Jun 30 05:47:03 2023 From: ngocmaimonica.huynh at unipv.it (Ngoc Mai Monica Huynh) Date: Fri, 30 Jun 2023 12:47:03 +0200 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> Message-ID: <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> Hi, I have no problem now in compiling, thank you for providing the Fortran interface. I have a follow up question. When running the code, I get this error, which I?m pretty sure it is related to DMDAGetElements(), since up to that line everything works fine. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/ [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: No error traceback is available, the problem could be in the main program. -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. ????????????????????????????????????? The lines of code I?m working on are the following: integer ierr MPI_Comm comm DM da3d ISLocalToGlobalMapping map PetscInt nel,nen PetscInt, pointer :: e_loc(:) call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, & 8,2,1,3,1,PETSC_NULL_INTEGER, & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, & da3d,ierr) call DMSetMatType(da3d,MATIS,ierr) call DMSetFromOptions(da3d,ierr) call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) call DMSetUp(da3d,ierr) call DMGetLocalToGlobalMapping(da3d,map,ierr) call DMDAGetElements(da3d,nel,nen,e_loc,ierr) By printing in a dummy way any kind of message before and after DMDAGetElements(), I cannot pass over it. Unfortunately, I cannot run with the debug option on this machine. Am I calling the routine in a wrong way? Thanks, Monica > On 29 Jun 2023, at 21:09, Matthew Knepley wrote: > > On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh > wrote: > Thank you. > Does this mean that DMDARestoreElements() is supported as well now? > > Yes. > > Thanks, > > Matt > > Monica > > >> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith > ha scritto: >> >> ? >> >> The code is ready in the branch barry/2023-06-29/add-dmdagetelements-fortran https://gitlab.com/petsc/petsc/-/merge_requests/6647 >> >> Barry >> >> >>> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh > wrote: >>> >>> That would be amazing, thank you very much! >>> Monica >>> >>>> On 29 Jun 2023, at 18:38, Barry Smith > wrote: >>>> >>>> >>>> I can provide the Fortran interface this afternoon. 
>>>> >>>> Barry >>>> >>>> >>>>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh > wrote: >>>>> >>>>> Hi everyone, >>>>> >>>>> I would need to use the routine DMDAGetElements() in our Fortran code. >>>>> However, as I read from the manual, there is no Fortran support for this routine. >>>>> Is there any similar alternative there? >>>>> >>>>> Many thanks! >>>>> Best regards, >>>>> Monica Huynh >>>> >>> >> > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jun 30 05:50:40 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 30 Jun 2023 06:50:40 -0400 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> Message-ID: On Fri, Jun 30, 2023 at 6:47?AM Ngoc Mai Monica Huynh < ngocmaimonica.huynh at unipv.it> wrote: > Hi, > > I have no problem now in compiling, thank you for providing the Fortran > interface. > I have a follow up question. > When running the code, I get this error, which I?m pretty sure it is > related to DMDAGetElements(), since up to that line everything works fine. > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and > https://petsc.org/release/faq/ > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: No error traceback is available, the problem could be in > the main program. > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > ????????????????????????????????????? > > The lines of code I?m working on are the following: > > integer ierr > > MPI_Comm comm > DM da3d > ISLocalToGlobalMapping map > PetscInt nel,nen > PetscInt, pointer :: e_loc(:) > > call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, > & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, > & 8,2,1,3,1,PETSC_NULL_INTEGER, > & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, > & da3d,ierr) > call DMSetMatType(da3d,MATIS,ierr) > call DMSetFromOptions(da3d,ierr) > call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) > call DMSetUp(da3d,ierr) > call DMGetLocalToGlobalMapping(da3d,map,ierr) > > call DMDAGetElements(da3d,nel,nen,e_loc,ierr) > > By printing in a dummy way any kind of message before and after > DMDAGetElements(), I cannot pass over it. > Unfortunately, I cannot run with the debug option on this machine. > Am I calling the routine in a wrong way? > Does src/dm/tutorials/ex11f90.F90 run for you? 
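One way to try that tutorial, assuming PETSC_DIR and PETSC_ARCH point at a build containing the new Fortran interface, is roughly:

  cd $PETSC_DIR/src/dm/tutorials
  make ex11f90
  mpiexec -n 2 ./ex11f90

The make target name and process count here are only illustrative; any small parallel run of the example is enough to confirm the Fortran interfaces are being found.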
Thanks, Matt > Thanks, > Monica > > > On 29 Jun 2023, at 21:09, Matthew Knepley wrote: > > On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh < > ngocmaimonica.huynh at unipv.it> wrote: > >> Thank you. >> Does this mean that DMDARestoreElements() is supported as well now? >> > > Yes. > > Thanks, > > Matt > > >> Monica >> >> >> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith ha >> scritto: >> >> ? >> >> The code is ready in the branch >> *barry/2023-06-29/add-dmdagetelements-fortran * >> https://gitlab.com/petsc/petsc/-/merge_requests/6647 >> >> Barry >> >> >> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh < >> ngocmaimonica.huynh at unipv.it> wrote: >> >> That would be amazing, thank you very much! >> Monica >> >> On 29 Jun 2023, at 18:38, Barry Smith wrote: >> >> >> I can provide the Fortran interface this afternoon. >> >> Barry >> >> >> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh < >> ngocmaimonica.huynh at unipv.it> wrote: >> >> Hi everyone, >> >> I would need to use the routine DMDAGetElements() in our Fortran code. >> However, as I read from the manual, there is no Fortran support for this >> routine. >> Is there any similar alternative there? >> >> Many thanks! >> Best regards, >> Monica Huynh >> >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl-johan.thore at liu.se Fri Jun 30 07:21:55 2023 From: carl-johan.thore at liu.se (Carl-Johan Thore) Date: Fri, 30 Jun 2023 12:21:55 +0000 Subject: [petsc-users] PCMG with PCREDISTRIBUTE In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> Message-ID: Hi, I'm trying to run an iterative solver (FGMRES for example) with PCMG as preconditioner. The setup of PCMG is done roughly as in ex42 of the PETSc-tutorials (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). Since I have many locked degrees-of-freedom I would like to use PCREDISTRIBUTE. However, this results in (30039 is the number of DOFs after redistribute and 55539 the number before): [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Nonconforming object sizes [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for MatProductType PtAP: A 30039x30039, P 55539x7803 [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4 GIT Date: 2023-04-24 16:37:00 +0200 [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023 [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 --download-scalapack --download-hdf5 --download-zlib --download-mumps --download-parmetis --download-metis --download-ptscotch --download-hypre --download-spai [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420 [0]PETSC ERROR: #2 MatProductSetFromOptions() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541 [0]PETSC ERROR: #3 MatPtAP() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868 [0]PETSC ERROR: #4 MatGalerkin() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899 [0]PETSC ERROR: #5 PCSetUp_MG() at /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029 [0]PETSC ERROR: #6 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 [0]PETSC ERROR: #7 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 [0]PETSC ERROR: #8 PCSetUp_Redistribute() at /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327 [0]PETSC ERROR: #9 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 [0]PETSC ERROR: #10 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 [0]PETSC ERROR: #11 KSPSolve_Private() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824 [0]PETSC ERROR: #12 KSPSolve() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070 It?s clear what happens I think, and it kind of make since not all levels are redistributed as they should (?). Is it possible to use PCMG with PCREDISTRIBUTE in an easy way? Kind regards, Carl-Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngocmaimonica.huynh at unipv.it Fri Jun 30 07:38:38 2023 From: ngocmaimonica.huynh at unipv.it (Ngoc Mai Monica Huynh) Date: Fri, 30 Jun 2023 14:38:38 +0200 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> Message-ID: <7B3693E0-F782-43A7-BA8C-DFD152471DEE@unipv.it> Yes, it compiles and run correctly Monica > On 30 Jun 2023, at 12:50, Matthew Knepley wrote: > > On Fri, Jun 30, 2023 at 6:47?AM Ngoc Mai Monica Huynh > wrote: > Hi, > > I have no problem now in compiling, thank you for providing the Fortran interface. > I have a follow up question. > When running the code, I get this error, which I?m pretty sure it is related to DMDAGetElements(), since up to that line everything works fine. > > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/ > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: No error traceback is available, the problem could be in the main program. 
> -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > ????????????????????????????????????? > > The lines of code I?m working on are the following: > > integer ierr > > MPI_Comm comm > DM da3d > ISLocalToGlobalMapping map > PetscInt nel,nen > PetscInt, pointer :: e_loc(:) > > call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, > & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, > & 8,2,1,3,1,PETSC_NULL_INTEGER, > & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, > & da3d,ierr) > call DMSetMatType(da3d,MATIS,ierr) > call DMSetFromOptions(da3d,ierr) > call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) > call DMSetUp(da3d,ierr) > call DMGetLocalToGlobalMapping(da3d,map,ierr) > > call DMDAGetElements(da3d,nel,nen,e_loc,ierr) > > By printing in a dummy way any kind of message before and after DMDAGetElements(), I cannot pass over it. > Unfortunately, I cannot run with the debug option on this machine. > Am I calling the routine in a wrong way? > > Does > > src/dm/tutorials/ex11f90.F90 > > run for you? > > Thanks, > > Matt > > Thanks, > Monica > > >> On 29 Jun 2023, at 21:09, Matthew Knepley > wrote: >> >> On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh > wrote: >> Thank you. >> Does this mean that DMDARestoreElements() is supported as well now? >> >> Yes. >> >> Thanks, >> >> Matt >> >> Monica >> >> >>> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith > ha scritto: >>> >>> ? >>> >>> The code is ready in the branch barry/2023-06-29/add-dmdagetelements-fortran https://gitlab.com/petsc/petsc/-/merge_requests/6647 >>> >>> Barry >>> >>> >>>> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh > wrote: >>>> >>>> That would be amazing, thank you very much! >>>> Monica >>>> >>>>> On 29 Jun 2023, at 18:38, Barry Smith > wrote: >>>>> >>>>> >>>>> I can provide the Fortran interface this afternoon. >>>>> >>>>> Barry >>>>> >>>>> >>>>>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh > wrote: >>>>>> >>>>>> Hi everyone, >>>>>> >>>>>> I would need to use the routine DMDAGetElements() in our Fortran code. >>>>>> However, as I read from the manual, there is no Fortran support for this routine. >>>>>> Is there any similar alternative there? >>>>>> >>>>>> Many thanks! >>>>>> Best regards, >>>>>> Monica Huynh >>>>> >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jun 30 08:08:30 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 30 Jun 2023 09:08:30 -0400 Subject: [petsc-users] Fortran alternative for DMDAGetElements? 
In-Reply-To: <7B3693E0-F782-43A7-BA8C-DFD152471DEE@unipv.it> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <7B3693E0-F782-43A7-BA8C-DFD152471DEE@unipv.it> Message-ID: On Fri, Jun 30, 2023 at 8:38?AM Ngoc Mai Monica Huynh < ngocmaimonica.huynh at unipv.it> wrote: > Yes, it compiles and run correctly > Okay, then we try to alter that example until it looks like your test. One thing is the #include at the top. Do you have that in your code? If Fortran does not find the interface, then it will just SEGV. Thanks, Matt > Monica > > On 30 Jun 2023, at 12:50, Matthew Knepley wrote: > > On Fri, Jun 30, 2023 at 6:47?AM Ngoc Mai Monica Huynh < > ngocmaimonica.huynh at unipv.it> wrote: > >> Hi, >> >> I have no problem now in compiling, thank you for providing the Fortran >> interface. >> I have a follow up question. >> When running the code, I get this error, which I?m pretty sure it is >> related to DMDAGetElements(), since up to that line everything works fine. >> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and >> https://petsc.org/release/faq/ >> [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [0]PETSC ERROR: No error traceback is available, the problem could be in >> the main program. >> -------------------------------------------------------------------------- >> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >> with errorcode 59. >> >> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >> You may or may not see output from other processes, depending on >> exactly when Open MPI kills them. >> ????????????????????????????????????? >> >> The lines of code I?m working on are the following: >> >> integer ierr >> >> MPI_Comm comm >> DM da3d >> ISLocalToGlobalMapping map >> PetscInt nel,nen >> PetscInt, pointer :: e_loc(:) >> >> call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >> & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, >> & 8,2,1,3,1,PETSC_NULL_INTEGER, >> & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, >> & da3d,ierr) >> call DMSetMatType(da3d,MATIS,ierr) >> call DMSetFromOptions(da3d,ierr) >> call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) >> call DMSetUp(da3d,ierr) >> call DMGetLocalToGlobalMapping(da3d,map,ierr) >> >> call DMDAGetElements(da3d,nel,nen,e_loc,ierr) >> >> By printing in a dummy way any kind of message before and after >> DMDAGetElements(), I cannot pass over it. >> Unfortunately, I cannot run with the debug option on this machine. >> Am I calling the routine in a wrong way? >> > > Does > > src/dm/tutorials/ex11f90.F90 > > run for you? > > Thanks, > > Matt > > >> Thanks, >> Monica >> >> >> On 29 Jun 2023, at 21:09, Matthew Knepley wrote: >> >> On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh < >> ngocmaimonica.huynh at unipv.it> wrote: >> >>> Thank you. >>> Does this mean that DMDARestoreElements() is supported as well now? >>> >> >> Yes. >> >> Thanks, >> >> Matt >> >> >>> Monica >>> >>> >>> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith >>> ha scritto: >>> >>> ? 
>>> >>> The code is ready in the branch >>> *barry/2023-06-29/add-dmdagetelements-fortran * >>> https://gitlab.com/petsc/petsc/-/merge_requests/6647 >>> >>> Barry >>> >>> >>> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh < >>> ngocmaimonica.huynh at unipv.it> wrote: >>> >>> That would be amazing, thank you very much! >>> Monica >>> >>> On 29 Jun 2023, at 18:38, Barry Smith wrote: >>> >>> >>> I can provide the Fortran interface this afternoon. >>> >>> Barry >>> >>> >>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh < >>> ngocmaimonica.huynh at unipv.it> wrote: >>> >>> Hi everyone, >>> >>> I would need to use the routine DMDAGetElements() in our Fortran code. >>> However, as I read from the manual, there is no Fortran support for this >>> routine. >>> Is there any similar alternative there? >>> >>> Many thanks! >>> Best regards, >>> Monica Huynh >>> >>> >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jun 30 08:17:51 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 30 Jun 2023 09:17:51 -0400 Subject: [petsc-users] PCMG with PCREDISTRIBUTE In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> Message-ID: <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> ex42.c provides directly the interpolation/restriction needed to move between levels in the loop for (k = 1; k < nlevels; k++) { PetscCall(DMCreateInterpolation(da_list[k - 1], da_list[k], &R, NULL)); PetscCall(PCMGSetInterpolation(pc, k, R)); PetscCall(MatDestroy(&R)); } The more standard alternative to this is to call KSPSetDM() and have the PCMG setup use the DM to construct the interpolations (I don't know why ex42.c does this construction itself instead of having the KSPSetDM() process handle it but that doesn't matter). The end result is the same in both cases. Since PCREDISTRIBUTE builds its own new matrix (by using only certain rows and columns of the original matrix) the original interpolation cannot be used for two reasons 1) (since it is for the full system) It is for the wrong problem. 2) In addition, if you ran with ex42.c the inner KSP does not have access to the interpolation that was constructed so you could not get PCMG to to work as indicated below. I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM and give it to the inner KSP PCMG?. So you solve problem 2 but not problem 1. So the short answer is that there is no "canned" way to use the PCMG process trivially with PCDISTRIBUTE. 
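A minimal sketch of the KSPSetDM() route mentioned above, for the full, non-redistributed system only; the DMDA da, the assembled operator A, and the vectors b and x are assumed to already exist, and error handling is trimmed:

  KSP ksp;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetDM(ksp, da));                /* lets PCMG build the coarse DMs and interpolations itself */
  PetscCall(KSPSetDMActive(ksp, PETSC_FALSE)); /* the DM is only used for the hierarchy; we supply the operator */
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));           /* e.g. -ksp_type fgmres -pc_type mg -pc_mg_levels 3 */
  PetscCall(KSPSolve(ksp, b, x));

This reproduces what ex42.c does by hand with DMCreateInterpolation()/PCMGSetInterpolation(), but it does not remove the size mismatch that PCREDISTRIBUTE introduces; that is what the two additional steps below address.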
To do what you want requires two additional steps 1) after you construct the full interpolation matrix (by using the DM) you need to remove the rows associated with the dof that have been removed by the "locked" variables (and the columns that are associated with coarse grid points that live on the removed points) so that the interpolation is the correct "size" for the smaller problem 2) since PCREDISTRIBUTE actually moves dof of freedom between MPI processes for load balancing after it has removed the locked variables you would need to do the exact same movement for the rows of the interpolation matrix that you have constructed (after you have removed the "locked" rows of the interpolation. Lots of bookkeeping to acheive 1 and 2 but conceptually simple. As an experiment you can try using PCGAMG on the redistributed matrix -redistribute_pc_type gamg to use algebraic multigrid just to see the time and convergence rates. Since GAMG creates its own interpolation based on the matrix and it will be built on the smaller redistributed matrix there will no issue with the wrong "sized" interpolation. Of course you have the overhead of algebraic multigrid and cannot take advantage of geometric multigrid. The GAMG approach may be satisfactory to your needs. If you are game for looking more closely at using redistribute with geometric multigrid and PETSc (which will require digging into PETSc source code and using internal information in the PETSc source code) you can start by looking at how we solve variational problems with SNES using reduced space active set methods. SNESVINEWTONRSLS /src/snes/impls/vi/rs/virs.c This code solves problem 1 see DMSetVI() it builds the entire interpolation and then pulls out the required non-locked part. Reduced space active set methods essentially lock the constrained dof and solve a smaller system without those dof at each iteration. But it does not solve problem 2. Moving the rows of the "smaller" interpolation to the correct MPI process based on where PCREDISTRIBUTE moved rows. To do this would requring looking at the PCREDISTRUBUTE code and extracting the information of where each row was moving and performing the process for the interpolation matrix. src/ksp/pc/impls/redistribute/redistribute.c Barry > On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users wrote: > > Hi, > > I'm trying to run an iterative solver (FGMRES for example) with PCMG as preconditioner. The setup of PCMG > is done roughly as in ex42 of the PETSc-tutorials (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). > Since I have many locked degrees-of-freedom I would like to use PCREDISTRIBUTE. However, this > results in (30039 is the number of DOFs after redistribute and 55539 the number before): > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Nonconforming object sizes > [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for MatProductType PtAP: A 30039x30039, P 55539x7803 > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4 GIT Date: 2023-04-24 16:37:00 +0200 > [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023 > [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 --download-scalapack --download-hdf5 --download-zlib --download-mumps --download-parmetis --download-metis --download-ptscotch --download-hypre --download-spai > [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420 > [0]PETSC ERROR: #2 MatProductSetFromOptions() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541 > [0]PETSC ERROR: #3 MatPtAP() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868 > [0]PETSC ERROR: #4 MatGalerkin() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899 > [0]PETSC ERROR: #5 PCSetUp_MG() at /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029 > [0]PETSC ERROR: #6 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 > [0]PETSC ERROR: #7 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 > [0]PETSC ERROR: #8 PCSetUp_Redistribute() at /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327 > [0]PETSC ERROR: #9 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 > [0]PETSC ERROR: #10 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 > [0]PETSC ERROR: #11 KSPSolve_Private() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824 > [0]PETSC ERROR: #12 KSPSolve() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070 > > It?s clear what happens I think, and it kind of make since not all levels are redistributed as they should (?). > Is it possible to use PCMG with PCREDISTRIBUTE in an easy way? > > Kind regards, > Carl-Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngocmaimonica.huynh at unipv.it Fri Jun 30 08:21:26 2023 From: ngocmaimonica.huynh at unipv.it (Ngoc Mai Monica Huynh) Date: Fri, 30 Jun 2023 15:21:26 +0200 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <7B3693E0-F782-43A7-BA8C-DFD152471DEE@unipv.it> Message-ID: <25C8D9F5-5CB4-45F5-A0FA-74B383E5C08B@unipv.it> Yes, I have the #include at the top of the code. Thank you very much for your help. I?ll let you know if I have any improvements from my side. Looking forward to hearing from you. Thanks, Monica > On 30 Jun 2023, at 15:08, Matthew Knepley wrote: > > On Fri, Jun 30, 2023 at 8:38?AM Ngoc Mai Monica Huynh > wrote: > Yes, it compiles and run correctly > > Okay, then we try to alter that example until it looks like your test. > > One thing is the #include at the top. Do you have that in your code? If Fortran does not find the interface, > then it will just SEGV. > > Thanks, > > Matt > > Monica > >> On 30 Jun 2023, at 12:50, Matthew Knepley > wrote: >> >> On Fri, Jun 30, 2023 at 6:47?AM Ngoc Mai Monica Huynh > wrote: >> Hi, >> >> I have no problem now in compiling, thank you for providing the Fortran interface. >> I have a follow up question. >> When running the code, I get this error, which I?m pretty sure it is related to DMDAGetElements(), since up to that line everything works fine. 
>> >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/ >> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >> [0]PETSC ERROR: No error traceback is available, the problem could be in the main program. >> -------------------------------------------------------------------------- >> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >> with errorcode 59. >> >> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >> You may or may not see output from other processes, depending on >> exactly when Open MPI kills them. >> ????????????????????????????????????? >> >> The lines of code I?m working on are the following: >> >> integer ierr >> >> MPI_Comm comm >> DM da3d >> ISLocalToGlobalMapping map >> PetscInt nel,nen >> PetscInt, pointer :: e_loc(:) >> >> call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >> & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, >> & 8,2,1,3,1,PETSC_NULL_INTEGER, >> & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, >> & da3d,ierr) >> call DMSetMatType(da3d,MATIS,ierr) >> call DMSetFromOptions(da3d,ierr) >> call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) >> call DMSetUp(da3d,ierr) >> call DMGetLocalToGlobalMapping(da3d,map,ierr) >> >> call DMDAGetElements(da3d,nel,nen,e_loc,ierr) >> >> By printing in a dummy way any kind of message before and after DMDAGetElements(), I cannot pass over it. >> Unfortunately, I cannot run with the debug option on this machine. >> Am I calling the routine in a wrong way? >> >> Does >> >> src/dm/tutorials/ex11f90.F90 >> >> run for you? >> >> Thanks, >> >> Matt >> >> Thanks, >> Monica >> >> >>> On 29 Jun 2023, at 21:09, Matthew Knepley > wrote: >>> >>> On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh > wrote: >>> Thank you. >>> Does this mean that DMDARestoreElements() is supported as well now? >>> >>> Yes. >>> >>> Thanks, >>> >>> Matt >>> >>> Monica >>> >>> >>>> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith > ha scritto: >>>> >>>> ? >>>> >>>> The code is ready in the branch barry/2023-06-29/add-dmdagetelements-fortran https://gitlab.com/petsc/petsc/-/merge_requests/6647 >>>> >>>> Barry >>>> >>>> >>>>> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh > wrote: >>>>> >>>>> That would be amazing, thank you very much! >>>>> Monica >>>>> >>>>>> On 29 Jun 2023, at 18:38, Barry Smith > wrote: >>>>>> >>>>>> >>>>>> I can provide the Fortran interface this afternoon. >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh > wrote: >>>>>>> >>>>>>> Hi everyone, >>>>>>> >>>>>>> I would need to use the routine DMDAGetElements() in our Fortran code. >>>>>>> However, as I read from the manual, there is no Fortran support for this routine. >>>>>>> Is there any similar alternative there? >>>>>>> >>>>>>> Many thanks! >>>>>>> Best regards, >>>>>>> Monica Huynh >>>>>> >>>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
>>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Jun 30 08:30:02 2023 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 30 Jun 2023 09:30:02 -0400 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: References: Message-ID: PetscCall(PetscInitialize(&argc, &argv, NULL, help)); gives us the args and you run: a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left Mark On Fri, Jun 30, 2023 at 6:16?AM Matthew Knepley wrote: > On Fri, Jun 30, 2023 at 1:13?AM Ng, Cho-Kuen via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Mark, >> >> The application code reads in parameters from an input file, where we can >> put the PETSc runtime options. Then we pass the options to >> PetscInitialize(...). Does that sounds right? >> > > PETSc will read command line argument automatically in PetscInitialize() > unless you shut it off. > > Thanks, > > Matt > > >> Cho >> ------------------------------ >> *From:* Ng, Cho-Kuen >> *Sent:* Thursday, June 29, 2023 8:32 PM >> *To:* Mark Adams >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Using PETSc GPU backend >> >> Mark, >> >> Thanks for the information. How do I put the runtime options for the >> executable, say, a.out, which does not have the provision to append >> arguments? Do I need to change the C++ main to read in the options? >> >> Cho >> ------------------------------ >> *From:* Mark Adams >> *Sent:* Thursday, June 29, 2023 5:55 PM >> *To:* Ng, Cho-Kuen >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] Using PETSc GPU backend >> >> Run with options: -mat_type aijcusparse -vec_type cuda -log_view >> -options_left >> >> The last column of the performance data (from -log_view) will be the >> percent flops on the GPU. Check that that is > 0. >> >> The end of the output will list the options that were used and options >> that were _not_ used (if any). Check that there are no options left. >> >> Mark >> >> On Thu, Jun 29, 2023 at 7:50?PM Ng, Cho-Kuen via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >> I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and >> used it by "spack load petsc/fwge6pf". Then I compiled the application >> code (purely CPU code) linking to the petsc package, hoping that I can get >> performance improvement using the petsc GPU backend. However, the timing >> was the same using the same number of MPI tasks with and without GPU >> accelerators. Have I missed something in the process, for example, setting >> up PETSc options at runtime to use the GPU backend? >> >> Thanks, >> Cho >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Fri Jun 30 08:54:39 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 30 Jun 2023 09:54:39 -0400 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: <25C8D9F5-5CB4-45F5-A0FA-74B383E5C08B@unipv.it> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <7B3693E0-F782-43A7-BA8C-DFD152471DEE@unipv.it> <25C8D9F5-5CB4-45F5-A0FA-74B383E5C08B@unipv.it> Message-ID: I glued your code fragment into a stand-alone program and it runs fine for me on 16 ranks. Does this simple program run for you? program main #include use petsc implicit none integer ierr MPI_Comm comm DM da3d ISLocalToGlobalMapping map PetscInt nel,nen PetscInt, pointer :: e_loc(:) PetscCallA(PetscInitialize(ierr)) comm = PETSC_COMM_WORLD call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, & & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, & & 8,2,1,3,1,PETSC_NULL_INTEGER, & & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, & & da3d,ierr) call DMSetMatType(da3d,MATIS,ierr) call DMSetFromOptions(da3d,ierr) call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) call DMSetUp(da3d,ierr) call DMGetLocalToGlobalMapping(da3d,map,ierr) call DMDAGetElements(da3d,nel,nen,e_loc,ierr) call DMDARestoreElements(da3d,nel,nen,e_loc,ierr) PetscCallA(DMDestroy(da3d,ierr)) PetscCallA(PetscFinalize(ierr) > On Jun 30, 2023, at 9:21 AM, Ngoc Mai Monica Huynh wrote: > > Yes, I have the #include at the top of the code. > > Thank you very much for your help. > I?ll let you know if I have any improvements from my side. > Looking forward to hearing from you. > > Thanks, > Monica > >> On 30 Jun 2023, at 15:08, Matthew Knepley > wrote: >> >> On Fri, Jun 30, 2023 at 8:38?AM Ngoc Mai Monica Huynh > wrote: >>> Yes, it compiles and run correctly >> >> Okay, then we try to alter that example until it looks like your test. >> >> One thing is the #include at the top. Do you have that in your code? If Fortran does not find the interface, >> then it will just SEGV. >> >> Thanks, >> >> Matt >> >>> Monica >>> >>>> On 30 Jun 2023, at 12:50, Matthew Knepley > wrote: >>>> >>>> On Fri, Jun 30, 2023 at 6:47?AM Ngoc Mai Monica Huynh > wrote: >>>>> Hi, >>>>> >>>>> I have no problem now in compiling, thank you for providing the Fortran interface. >>>>> I have a follow up question. >>>>> When running the code, I get this error, which I?m pretty sure it is related to DMDAGetElements(), since up to that line everything works fine. >>>>> >>>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>>> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/ >>>>> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >>>>> [0]PETSC ERROR: No error traceback is available, the problem could be in the main program. >>>>> -------------------------------------------------------------------------- >>>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >>>>> with errorcode 59. >>>>> >>>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >>>>> You may or may not see output from other processes, depending on >>>>> exactly when Open MPI kills them. >>>>> ????????????????????????????????????? 
>>>>> >>>>> The lines of code I?m working on are the following: >>>>> >>>>> integer ierr >>>>> >>>>> MPI_Comm comm >>>>> DM da3d >>>>> ISLocalToGlobalMapping map >>>>> PetscInt nel,nen >>>>> PetscInt, pointer :: e_loc(:) >>>>> >>>>> call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >>>>> & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, >>>>> & 8,2,1,3,1,PETSC_NULL_INTEGER, >>>>> & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, >>>>> & da3d,ierr) >>>>> call DMSetMatType(da3d,MATIS,ierr) >>>>> call DMSetFromOptions(da3d,ierr) >>>>> call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) >>>>> call DMSetUp(da3d,ierr) >>>>> call DMGetLocalToGlobalMapping(da3d,map,ierr) >>>>> >>>>> call DMDAGetElements(da3d,nel,nen,e_loc,ierr) >>>>> >>>>> By printing in a dummy way any kind of message before and after DMDAGetElements(), I cannot pass over it. >>>>> Unfortunately, I cannot run with the debug option on this machine. >>>>> Am I calling the routine in a wrong way? >>>> >>>> Does >>>> >>>> src/dm/tutorials/ex11f90.F90 >>>> >>>> run for you? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>>> Thanks, >>>>> Monica >>>>> >>>>> >>>>>> On 29 Jun 2023, at 21:09, Matthew Knepley > wrote: >>>>>> >>>>>> On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh > wrote: >>>>>>> Thank you. >>>>>>> Does this mean that DMDARestoreElements() is supported as well now? >>>>>> >>>>>> Yes. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>>> Monica >>>>>>> >>>>>>> >>>>>>>> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith > ha scritto: >>>>>>>> >>>>>>>> ? >>>>>>>> >>>>>>>> The code is ready in the branch barry/2023-06-29/add-dmdagetelements-fortran https://gitlab.com/petsc/petsc/-/merge_requests/6647 >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> >>>>>>>>> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh > wrote: >>>>>>>>> >>>>>>>>> That would be amazing, thank you very much! >>>>>>>>> Monica >>>>>>>>> >>>>>>>>>> On 29 Jun 2023, at 18:38, Barry Smith > wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I can provide the Fortran interface this afternoon. >>>>>>>>>> >>>>>>>>>> Barry >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh > wrote: >>>>>>>>>>> >>>>>>>>>>> Hi everyone, >>>>>>>>>>> >>>>>>>>>>> I would need to use the routine DMDAGetElements() in our Fortran code. >>>>>>>>>>> However, as I read from the manual, there is no Fortran support for this routine. >>>>>>>>>>> Is there any similar alternative there? >>>>>>>>>>> >>>>>>>>>>> Many thanks! >>>>>>>>>>> Best regards, >>>>>>>>>>> Monica Huynh >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Fri Jun 30 08:56:55 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 30 Jun 2023 09:56:55 -0400 Subject: [petsc-users] PCMG with PCREDISTRIBUTE In-Reply-To: <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> Message-ID: <7FDF0ACC-CBF1-427D-B1F2-425D8877DB9D@petsc.dev> Oh, I forgot to mention you should first check that the PCMG works quite well for the full system (without the PCREDISTRIBUTE); the convergence on the redistributed system (assuming you did all the work to get PCMG to work for you) should be very similar to (but not measurably better) than the convergence on the full system. > On Jun 30, 2023, at 9:17 AM, Barry Smith wrote: > > > ex42.c provides directly the interpolation/restriction needed to move between levels in the loop > > for (k = 1; k < nlevels; k++) { > PetscCall(DMCreateInterpolation(da_list[k - 1], da_list[k], &R, NULL)); > PetscCall(PCMGSetInterpolation(pc, k, R)); > PetscCall(MatDestroy(&R)); > } > > The more standard alternative to this is to call KSPSetDM() and have the PCMG setup use the DM > to construct the interpolations (I don't know why ex42.c does this construction itself instead of having the KSPSetDM() process handle it but that doesn't matter). The end result is the same in both cases. > > Since PCREDISTRIBUTE builds its own new matrix (by using only certain rows and columns of the original matrix) the original interpolation > cannot be used for two reasons > > 1) (since it is for the full system) It is for the wrong problem. > > 2) In addition, if you ran with ex42.c the inner KSP does not have access to the interpolation that was constructed so you could not get PCMG to to work as indicated below. > > I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM > and give it to the inner KSP PCMG?. So you solve problem 2 but not problem 1. > > So the short answer is that there is no "canned" way to use the PCMG process trivially with PCDISTRIBUTE. > > To do what you want requires two additional steps > > 1) after you construct the full interpolation matrix (by using the DM) you need to remove the rows associated with the dof that have been removed by the "locked" variables (and the columns that are associated with coarse grid points that live on the removed points) so that the interpolation is the correct "size" for the smaller problem > > 2) since PCREDISTRIBUTE actually moves dof of freedom between MPI processes for load balancing after it has removed the locked variables you would need to do the exact same movement for the rows of the interpolation matrix that you have constructed (after you have removed the "locked" rows of the interpolation. > > Lots of bookkeeping to acheive 1 and 2 but conceptually simple. > > As an experiment you can try using PCGAMG on the redistributed matrix -redistribute_pc_type gamg to use algebraic multigrid just to see the time and convergence rates. Since GAMG creates its own interpolation based on the matrix and it will be built on the smaller redistributed matrix there will no issue with the wrong "sized" interpolation. Of course you have the overhead of algebraic multigrid and cannot take advantage of geometric multigrid. The GAMG approach may be satisfactory to your needs. 
> > If you are game for looking more closely at using redistribute with geometric multigrid and PETSc (which will require digging into PETSc source code and using internal information in the PETSc source code) you can start by looking at how we solve variational problems with SNES using reduced space active set methods. SNESVINEWTONRSLS /src/snes/impls/vi/rs/virs.c This code solves problem 1 see DMSetVI() it builds the entire interpolation and then pulls out the required non-locked part. Reduced space active set methods essentially lock the constrained dof and solve a smaller system without those dof at each iteration. > > But it does not solve problem 2. Moving the rows of the "smaller" interpolation to the correct MPI process based on where PCREDISTRIBUTE moved rows. To do this would requring looking at the PCREDISTRUBUTE code and extracting the information of where each row was moving and performing the process for the interpolation matrix. > src/ksp/pc/impls/redistribute/redistribute.c > > Barry > > > > > > > > > >> On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users wrote: >> >> Hi, >> >> I'm trying to run an iterative solver (FGMRES for example) with PCMG as preconditioner. The setup of PCMG >> is done roughly as in ex42 of the PETSc-tutorials (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). >> Since I have many locked degrees-of-freedom I would like to use PCREDISTRIBUTE. However, this >> results in (30039 is the number of DOFs after redistribute and 55539 the number before): >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Nonconforming object sizes >> [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for MatProductType PtAP: A 30039x30039, P 55539x7803 >> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
>> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4 GIT Date: 2023-04-24 16:37:00 +0200 >> [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023 >> [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 --download-scalapack --download-hdf5 --download-zlib --download-mumps --download-parmetis --download-metis --download-ptscotch --download-hypre --download-spai >> [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420 >> [0]PETSC ERROR: #2 MatProductSetFromOptions() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541 >> [0]PETSC ERROR: #3 MatPtAP() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868 >> [0]PETSC ERROR: #4 MatGalerkin() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899 >> [0]PETSC ERROR: #5 PCSetUp_MG() at /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029 >> [0]PETSC ERROR: #6 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 >> [0]PETSC ERROR: #7 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 >> [0]PETSC ERROR: #8 PCSetUp_Redistribute() at /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327 >> [0]PETSC ERROR: #9 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 >> [0]PETSC ERROR: #10 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 >> [0]PETSC ERROR: #11 KSPSolve_Private() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824 >> [0]PETSC ERROR: #12 KSPSolve() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070 >> >> It?s clear what happens I think, and it kind of make since not all levels are redistributed as they should (?). >> Is it possible to use PCMG with PCREDISTRIBUTE in an easy way? >> >> Kind regards, >> Carl-Johan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jun 30 09:01:23 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 30 Jun 2023 10:01:23 -0400 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: References: Message-ID: Note that options like -mat_type aijcusparse -vec_type cuda only work if the program is set up to allow runtime swapping of matrix and vector types. If you have a call to MatCreateMPIAIJ() or other specific types then then these options do nothing but because Mark had you use -options_left the program will tell you at the end that it did not use the option so you will know. > On Jun 30, 2023, at 9:30 AM, Mark Adams wrote: > > PetscCall(PetscInitialize(&argc, &argv, NULL, help)); gives us the args and you run: > > a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left > > Mark > > On Fri, Jun 30, 2023 at 6:16?AM Matthew Knepley > wrote: >> On Fri, Jun 30, 2023 at 1:13?AM Ng, Cho-Kuen via petsc-users > wrote: >>> Mark, >>> >>> The application code reads in parameters from an input file, where we can put the PETSc runtime options. Then we pass the options to PetscInitialize(...). Does that sounds right? >> >> PETSc will read command line argument automatically in PetscInitialize() unless you shut it off. >> >> Thanks, >> >> Matt >> >>> Cho >>> From: Ng, Cho-Kuen > >>> Sent: Thursday, June 29, 2023 8:32 PM >>> To: Mark Adams > >>> Cc: petsc-users at mcs.anl.gov > >>> Subject: Re: [petsc-users] Using PETSc GPU backend >>> >>> Mark, >>> >>> Thanks for the information. 
How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments? Do I need to change the C++ main to read in the options? >>> >>> Cho >>> From: Mark Adams > >>> Sent: Thursday, June 29, 2023 5:55 PM >>> To: Ng, Cho-Kuen > >>> Cc: petsc-users at mcs.anl.gov > >>> Subject: Re: [petsc-users] Using PETSc GPU backend >>> >>> Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left >>> >>> The last column of the performance data (from -log_view) will be the percent flops on the GPU. Check that that is > 0. >>> >>> The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left. >>> >>> Mark >>> >>> On Thu, Jun 29, 2023 at 7:50?PM Ng, Cho-Kuen via petsc-users > wrote: >>> I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend? >>> >>> Thanks, >>> Cho >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl-johan.thore at liu.se Fri Jun 30 09:16:35 2023 From: carl-johan.thore at liu.se (Carl-Johan Thore) Date: Fri, 30 Jun 2023 14:16:35 +0000 Subject: [petsc-users] PCMG with PCREDISTRIBUTE In-Reply-To: <7FDF0ACC-CBF1-427D-B1F2-425D8877DB9D@petsc.dev> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> <7FDF0ACC-CBF1-427D-B1F2-425D8877DB9D@petsc.dev> Message-ID: Thanks for the quick reply and the suggestions! " ... you should first check that the PCMG works quite well " Yes, the PCMG works very well for the full system. "I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM and give it to the inner KSP PCMG?. So you solve problem 2 but not problem 1." Yes, it's slightly different so problem 2 should be solved. It looked somewhat complicated to get PCMG to work with redistribute, so I'll try with PCGAMG first (it ran immediately with redistribute, but was slower than PCMG on my, very small, test problem. I'll try to tune the settings). A related question: I'm here using a DMDA for a structured grid but I'm locking so many DOFs that for many of the elements all DOFs are locked. In such a case could it make sense to switch/convert the DMDA to a DMPlex containing only those elements that actually have DOFs? 
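A minimal command-line sketch of the PCGAMG-under-PCREDISTRIBUTE experiment discussed above. Only -redistribute_pc_type gamg comes from the thread itself; the outer preonly/redistribute combination is the usual way PCREDISTRIBUTE is driven, the executable name is taken from the error output earlier in the thread, and the tuning flags are illustrative assumptions to start from, not the poster's actual settings:

    ./topopt -ksp_type preonly -pc_type redistribute \
             -redistribute_ksp_type fgmres -redistribute_pc_type gamg \
             -redistribute_pc_gamg_threshold 0.01 \
             -redistribute_mg_levels_ksp_type chebyshev \
             -redistribute_mg_levels_pc_type jacobi \
             -redistribute_ksp_converged_reason -log_view

The inner solver owned by PCREDISTRIBUTE takes the redistribute_ options prefix, so the usual GAMG knobs are reached by prepending that prefix; running with -help lists the exact option names available in a given PETSc version.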
From: Barry Smith Sent: Friday, June 30, 2023 3:57 PM To: Carl-Johan Thore Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE Oh, I forgot to mention you should first check that the PCMG works quite well for the full system (without the PCREDISTRIBUTE); the convergence on the redistributed system (assuming you did all the work to get PCMG to work for you) should be very similar to (but not measurably better) than the convergence on the full system. On Jun 30, 2023, at 9:17 AM, Barry Smith > wrote: ex42.c provides directly the interpolation/restriction needed to move between levels in the loop for (k = 1; k < nlevels; k++) { PetscCall(DMCreateInterpolation(da_list[k - 1], da_list[k], &R, NULL)); PetscCall(PCMGSetInterpolation(pc, k, R)); PetscCall(MatDestroy(&R)); } The more standard alternative to this is to call KSPSetDM() and have the PCMG setup use the DM to construct the interpolations (I don't know why ex42.c does this construction itself instead of having the KSPSetDM() process handle it but that doesn't matter). The end result is the same in both cases. Since PCREDISTRIBUTE builds its own new matrix (by using only certain rows and columns of the original matrix) the original interpolation cannot be used for two reasons 1) (since it is for the full system) It is for the wrong problem. 2) In addition, if you ran with ex42.c the inner KSP does not have access to the interpolation that was constructed so you could not get PCMG to to work as indicated below. I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM and give it to the inner KSP PCMG?. So you solve problem 2 but not problem 1. So the short answer is that there is no "canned" way to use the PCMG process trivially with PCDISTRIBUTE. To do what you want requires two additional steps 1) after you construct the full interpolation matrix (by using the DM) you need to remove the rows associated with the dof that have been removed by the "locked" variables (and the columns that are associated with coarse grid points that live on the removed points) so that the interpolation is the correct "size" for the smaller problem 2) since PCREDISTRIBUTE actually moves dof of freedom between MPI processes for load balancing after it has removed the locked variables you would need to do the exact same movement for the rows of the interpolation matrix that you have constructed (after you have removed the "locked" rows of the interpolation. Lots of bookkeeping to acheive 1 and 2 but conceptually simple. As an experiment you can try using PCGAMG on the redistributed matrix -redistribute_pc_type gamg to use algebraic multigrid just to see the time and convergence rates. Since GAMG creates its own interpolation based on the matrix and it will be built on the smaller redistributed matrix there will no issue with the wrong "sized" interpolation. Of course you have the overhead of algebraic multigrid and cannot take advantage of geometric multigrid. The GAMG approach may be satisfactory to your needs. If you are game for looking more closely at using redistribute with geometric multigrid and PETSc (which will require digging into PETSc source code and using internal information in the PETSc source code) you can start by looking at how we solve variational problems with SNES using reduced space active set methods. 
SNESVINEWTONRSLS /src/snes/impls/vi/rs/virs.c This code solves problem 1 see DMSetVI() it builds the entire interpolation and then pulls out the required non-locked part. Reduced space active set methods essentially lock the constrained dof and solve a smaller system without those dof at each iteration. But it does not solve problem 2. Moving the rows of the "smaller" interpolation to the correct MPI process based on where PCREDISTRIBUTE moved rows. To do this would requring looking at the PCREDISTRUBUTE code and extracting the information of where each row was moving and performing the process for the interpolation matrix. src/ksp/pc/impls/redistribute/redistribute.c Barry On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users > wrote: Hi, I'm trying to run an iterative solver (FGMRES for example) with PCMG as preconditioner. The setup of PCMG is done roughly as in ex42 of the PETSc-tutorials (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). Since I have many locked degrees-of-freedom I would like to use PCREDISTRIBUTE. However, this results in (30039 is the number of DOFs after redistribute and 55539 the number before): [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Nonconforming object sizes [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for MatProductType PtAP: A 30039x30039, P 55539x7803 [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4 GIT Date: 2023-04-24 16:37:00 +0200 [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023 [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 --download-scalapack --download-hdf5 --download-zlib --download-mumps --download-parmetis --download-metis --download-ptscotch --download-hypre --download-spai [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420 [0]PETSC ERROR: #2 MatProductSetFromOptions() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541 [0]PETSC ERROR: #3 MatPtAP() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868 [0]PETSC ERROR: #4 MatGalerkin() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899 [0]PETSC ERROR: #5 PCSetUp_MG() at /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029 [0]PETSC ERROR: #6 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 [0]PETSC ERROR: #7 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 [0]PETSC ERROR: #8 PCSetUp_Redistribute() at /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327 [0]PETSC ERROR: #9 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 [0]PETSC ERROR: #10 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 [0]PETSC ERROR: #11 KSPSolve_Private() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824 [0]PETSC ERROR: #12 KSPSolve() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070 It's clear what happens I think, and it kind of make since not all levels are redistributed as they should (?). Is it possible to use PCMG with PCREDISTRIBUTE in an easy way? Kind regards, Carl-Johan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Jun 30 09:22:21 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 30 Jun 2023 10:22:21 -0400 Subject: [petsc-users] PCMG with PCREDISTRIBUTE In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> <7FDF0ACC-CBF1-427D-B1F2-425D8877DB9D@petsc.dev> Message-ID: On Fri, Jun 30, 2023 at 10:16?AM Carl-Johan Thore via petsc-users < petsc-users at mcs.anl.gov> wrote: > Thanks for the quick reply and the suggestions! > > > > ? ? you should first check that the PCMG works quite well ? > > > > Yes, the PCMG works very well for the full system. > > > > ?I am guessing that your code is slightly different than ex42.c because > you take the interpolation matrix provided by the DM > > and give it to the inner KSP PCMG?. So you solve problem 2 but not problem > 1.? > > > > Yes, it?s slightly different so problem 2 should be solved. > > > > It looked somewhat complicated to get PCMG to work with redistribute, so > I?ll try with PCGAMG first > > (it ran immediately with redistribute, but was slower than PCMG on my, > very small, test problem. I?ll try > > to tune the settings). > > > > A related question: I?m here using a DMDA for a structured grid but I?m > locking so many DOFs that for many of the elements > > all DOFs are locked. In such a case could it make sense to switch/convert > the DMDA to a DMPlex containing only those > > elements that actually have DOFs? > Possibly, but if you are doing FD, then there is built-in topology in DMDA that is not present in Plex, so finding the neighbors in the right order is harder (possible, but harder, we address this in some new work that is not yet merged). There is also structured adaptive support with DMForest, but this also does not preserve the stencil. Thanks, Matt > *From:* Barry Smith > *Sent:* Friday, June 30, 2023 3:57 PM > *To:* Carl-Johan Thore > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] PCMG with PCREDISTRIBUTE > > > > > > Oh, I forgot to mention you should first check that the PCMG works > quite well for the full system (without the PCREDISTRIBUTE); the convergence > > on the redistributed system (assuming you did all the work to get PCMG to > work for you) should be very similar to (but not measurably better) than > the convergence on the full system. > > > > > > > > On Jun 30, 2023, at 9:17 AM, Barry Smith wrote: > > > > > > ex42.c provides directly the interpolation/restriction needed to move > between levels in the loop > > > > for (k = 1; k < nlevels; k++) { > > PetscCall(DMCreateInterpolation(da_list[k - 1], da_list[k], &R, NULL)); > > PetscCall(PCMGSetInterpolation(pc, k, R)); > > PetscCall(MatDestroy(&R)); > > } > > > > The more standard alternative to this is to call KSPSetDM() and have the > PCMG setup use the DM > > to construct the interpolations (I don't know why ex42.c does this > construction itself instead of having the KSPSetDM() process handle it but > that doesn't matter). The end result is the same in both cases. > > > > Since PCREDISTRIBUTE builds its own new matrix (by using only certain > rows and columns of the original matrix) the original interpolation > > cannot be used for two reasons > > > > 1) (since it is for the full system) It is for the wrong problem. 
> > > > 2) In addition, if you ran with ex42.c the inner KSP does not have access > to the interpolation that was constructed so you could not get PCMG to to > work as indicated below. > > > > I am guessing that your code is slightly different than ex42.c because you > take the interpolation matrix provided by the DM > > and give it to the inner KSP PCMG?. So you solve problem 2 but not problem > 1. > > > > So the short answer is that there is no "canned" way to use the PCMG > process trivially with PCDISTRIBUTE. > > > > To do what you want requires two additional steps > > > > 1) after you construct the full interpolation matrix (by using the DM) > you need to remove the rows associated with the dof that have been removed > by the "locked" variables (and the columns that are associated with coarse > grid points that live on the removed points) so that the interpolation is > the correct "size" for the smaller problem > > > > 2) since PCREDISTRIBUTE actually moves dof of freedom between MPI > processes for load balancing after it has removed the locked variables you > would need to do the exact same movement for the rows of the interpolation > matrix that you have constructed (after you have removed the "locked" rows > of the interpolation. > > > > Lots of bookkeeping to acheive 1 and 2 but conceptually simple. > > > > As an experiment you can try using PCGAMG on the redistributed matrix > -redistribute_pc_type gamg to use algebraic multigrid just to see the time > and convergence rates. Since GAMG creates its own interpolation based on > the matrix and it will be built on the smaller redistributed matrix there > will no issue with the wrong "sized" interpolation. Of course you have the > overhead of algebraic multigrid and cannot take advantage of geometric > multigrid. The GAMG approach may be satisfactory to your needs. > > > > If you are game for looking more closely at using redistribute with > geometric multigrid and PETSc (which will require digging into PETSc source > code and using internal information in the PETSc source code) you can start > by looking at how we solve variational problems with SNES using reduced > space active set methods. SNESVINEWTONRSLS /src/snes/impls/vi/rs/virs.c > This code solves problem 1 see DMSetVI() it builds the entire interpolation > and then pulls out the required non-locked part. Reduced space active set > methods essentially lock the constrained dof and solve a smaller system > without those dof at each iteration. > > > > But it does not solve problem 2. Moving the rows of the "smaller" > interpolation to the correct MPI process based on where PCREDISTRIBUTE > moved rows. To do this would requring looking at the PCREDISTRUBUTE code > and extracting the information of where each row was moving and performing > the process for the interpolation matrix. > > src/ksp/pc/impls/redistribute/redistribute.c > > > > Barry > > > > > > > > > > > > > > > > > > > > On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > > > Hi, > > > > I'm trying to run an iterative solver (FGMRES for example) with PCMG as > preconditioner. The setup of PCMG > > is done roughly as in ex42 of the PETSc-tutorials ( > https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). > > Since I have many locked degrees-of-freedom I would like to use > PCREDISTRIBUTE. 
However, this > > results in (30039 is the number of DOFs after redistribute and 55539 the > number before): > > > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [0]PETSC ERROR: Nonconforming object sizes > > [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for > MatProductType PtAP: A 30039x30039, P 55539x7803 > > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > > [0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4 > GIT Date: 2023-04-24 16:37:00 +0200 > > [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023 > > [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" > CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" > CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 > --download-scalapack --download-hdf5 --download-zlib --download-mumps > --download-parmetis --download-metis --download-ptscotch --download-hypre > --download-spai > > [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at > /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420 > > [0]PETSC ERROR: #2 MatProductSetFromOptions() at > /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541 > > [0]PETSC ERROR: #3 MatPtAP() at > /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868 > > [0]PETSC ERROR: #4 MatGalerkin() at > /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899 > > [0]PETSC ERROR: #5 PCSetUp_MG() at > /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029 > > [0]PETSC ERROR: #6 PCSetUp() at > /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 > > [0]PETSC ERROR: #7 KSPSetUp() at > /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 > > [0]PETSC ERROR: #8 PCSetUp_Redistribute() at > /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327 > > [0]PETSC ERROR: #9 PCSetUp() at > /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 > > [0]PETSC ERROR: #10 KSPSetUp() at > /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 > > [0]PETSC ERROR: #11 KSPSolve_Private() at > /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824 > > [0]PETSC ERROR: #12 KSPSolve() at > /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070 > > > > It?s clear what happens I think, and it kind of make since not all levels > are redistributed as they should (?). > > Is it possible to use PCMG with PCREDISTRIBUTE in an easy way? > > > > Kind regards, > > Carl-Johan > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngocmaimonica.huynh at unipv.it Fri Jun 30 09:48:13 2023 From: ngocmaimonica.huynh at unipv.it (Ngoc Mai Monica Huynh) Date: Fri, 30 Jun 2023 16:48:13 +0200 Subject: [petsc-users] Fortran alternative for DMDAGetElements? In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <7B3693E0-F782-43A7-BA8C-DFD152471DEE@unipv.it> <25C8D9F5-5CB4-45F5-A0FA-74B383E5C08B@unipv.it> Message-ID: <90CAFA2B-577C-4C03-8026-FC586C0FA477@unipv.it> Hi, yes, it runs and now also my code. I moved it from the extension .F to .F90. (With my older codes the extension .F still works fine, but not with this one) Thank you for the patience and support! 
Monica > On 30 Jun 2023, at 15:54, Barry Smith wrote: > > > I glued your code fragment into a stand-alone program and it runs fine for me on 16 ranks. Does this simple program run for you? > > program main > #include > use petsc > implicit none > integer ierr > > MPI_Comm comm > DM da3d > ISLocalToGlobalMapping map > PetscInt nel,nen > PetscInt, pointer :: e_loc(:) > > PetscCallA(PetscInitialize(ierr)) > comm = PETSC_COMM_WORLD > call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, & > & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, & > & 8,2,1,3,1,PETSC_NULL_INTEGER, & > & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, & > & da3d,ierr) > call DMSetMatType(da3d,MATIS,ierr) > call DMSetFromOptions(da3d,ierr) > call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) > call DMSetUp(da3d,ierr) > call DMGetLocalToGlobalMapping(da3d,map,ierr) > > call DMDAGetElements(da3d,nel,nen,e_loc,ierr) > call DMDARestoreElements(da3d,nel,nen,e_loc,ierr) > PetscCallA(DMDestroy(da3d,ierr)) > > PetscCallA(PetscFinalize(ierr) > > > >> On Jun 30, 2023, at 9:21 AM, Ngoc Mai Monica Huynh wrote: >> >> Yes, I have the #include at the top of the code. >> >> Thank you very much for your help. >> I?ll let you know if I have any improvements from my side. >> Looking forward to hearing from you. >> >> Thanks, >> Monica >> >>> On 30 Jun 2023, at 15:08, Matthew Knepley > wrote: >>> >>> On Fri, Jun 30, 2023 at 8:38?AM Ngoc Mai Monica Huynh > wrote: >>> Yes, it compiles and run correctly >>> >>> Okay, then we try to alter that example until it looks like your test. >>> >>> One thing is the #include at the top. Do you have that in your code? If Fortran does not find the interface, >>> then it will just SEGV. >>> >>> Thanks, >>> >>> Matt >>> >>> Monica >>> >>>> On 30 Jun 2023, at 12:50, Matthew Knepley > wrote: >>>> >>>> On Fri, Jun 30, 2023 at 6:47?AM Ngoc Mai Monica Huynh > wrote: >>>> Hi, >>>> >>>> I have no problem now in compiling, thank you for providing the Fortran interface. >>>> I have a follow up question. >>>> When running the code, I get this error, which I?m pretty sure it is related to DMDAGetElements(), since up to that line everything works fine. >>>> >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>>> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/ >>>> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >>>> [0]PETSC ERROR: No error traceback is available, the problem could be in the main program. >>>> -------------------------------------------------------------------------- >>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >>>> with errorcode 59. >>>> >>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >>>> You may or may not see output from other processes, depending on >>>> exactly when Open MPI kills them. >>>> ????????????????????????????????????? 
>>>> >>>> The lines of code I?m working on are the following: >>>> >>>> integer ierr >>>> >>>> MPI_Comm comm >>>> DM da3d >>>> ISLocalToGlobalMapping map >>>> PetscInt nel,nen >>>> PetscInt, pointer :: e_loc(:) >>>> >>>> call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >>>> & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, >>>> & 8,2,1,3,1,PETSC_NULL_INTEGER, >>>> & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, >>>> & da3d,ierr) >>>> call DMSetMatType(da3d,MATIS,ierr) >>>> call DMSetFromOptions(da3d,ierr) >>>> call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) >>>> call DMSetUp(da3d,ierr) >>>> call DMGetLocalToGlobalMapping(da3d,map,ierr) >>>> >>>> call DMDAGetElements(da3d,nel,nen,e_loc,ierr) >>>> >>>> By printing in a dummy way any kind of message before and after DMDAGetElements(), I cannot pass over it. >>>> Unfortunately, I cannot run with the debug option on this machine. >>>> Am I calling the routine in a wrong way? >>>> >>>> Does >>>> >>>> src/dm/tutorials/ex11f90.F90 >>>> >>>> run for you? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> Thanks, >>>> Monica >>>> >>>> >>>>> On 29 Jun 2023, at 21:09, Matthew Knepley > wrote: >>>>> >>>>> On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh > wrote: >>>>> Thank you. >>>>> Does this mean that DMDARestoreElements() is supported as well now? >>>>> >>>>> Yes. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> Monica >>>>> >>>>> >>>>>> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith > ha scritto: >>>>>> >>>>>> ? >>>>>> >>>>>> The code is ready in the branch barry/2023-06-29/add-dmdagetelements-fortran https://gitlab.com/petsc/petsc/-/merge_requests/6647 >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>>> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh > wrote: >>>>>>> >>>>>>> That would be amazing, thank you very much! >>>>>>> Monica >>>>>>> >>>>>>>> On 29 Jun 2023, at 18:38, Barry Smith > wrote: >>>>>>>> >>>>>>>> >>>>>>>> I can provide the Fortran interface this afternoon. >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> >>>>>>>>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh > wrote: >>>>>>>>> >>>>>>>>> Hi everyone, >>>>>>>>> >>>>>>>>> I would need to use the routine DMDAGetElements() in our Fortran code. >>>>>>>>> However, as I read from the manual, there is no Fortran support for this routine. >>>>>>>>> Is there any similar alternative there? >>>>>>>>> >>>>>>>>> Many thanks! >>>>>>>>> Best regards, >>>>>>>>> Monica Huynh >>>>>>>> >>>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jun 30 09:50:18 2023 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 30 Jun 2023 10:50:18 -0400 Subject: [petsc-users] Fortran alternative for DMDAGetElements? 
In-Reply-To: <90CAFA2B-577C-4C03-8026-FC586C0FA477@unipv.it> References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <7B3693E0-F782-43A7-BA8C-DFD152471DEE@unipv.it> <25C8D9F5-5CB4-45F5-A0FA-74B383E5C08B@unipv.it> <90CAFA2B-577C-4C03-8026-FC586C0FA477@unipv.it> Message-ID: On Fri, Jun 30, 2023 at 10:48?AM Ngoc Mai Monica Huynh < ngocmaimonica.huynh at unipv.it> wrote: > Hi, > yes, it runs and now also my code. > I moved it from the extension .F to .F90. > (With my older codes the extension .F still works fine, but not with this > one) > Yes, you need .F90 to properly handle the interface definitions. Thanks, Matt > Thank you for the patience and support! > Monica > > On 30 Jun 2023, at 15:54, Barry Smith wrote: > > > I glued your code fragment into a stand-alone program and it runs fine > for me on 16 ranks. Does this simple program run for you? > > program main > #include > use petsc > implicit none > integer ierr > > MPI_Comm comm > DM da3d > ISLocalToGlobalMapping map > PetscInt nel,nen > PetscInt, pointer :: e_loc(:) > > PetscCallA(PetscInitialize(ierr)) > comm = PETSC_COMM_WORLD > call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, & > & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, & > & 8,2,1,3,1,PETSC_NULL_INTEGER, & > & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, & > & da3d,ierr) > call DMSetMatType(da3d,MATIS,ierr) > call DMSetFromOptions(da3d,ierr) > call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) > call DMSetUp(da3d,ierr) > call DMGetLocalToGlobalMapping(da3d,map,ierr) > > call DMDAGetElements(da3d,nel,nen,e_loc,ierr) > call DMDARestoreElements(da3d,nel,nen,e_loc,ierr) > PetscCallA(DMDestroy(da3d,ierr)) > > PetscCallA(PetscFinalize(ierr) > > > > On Jun 30, 2023, at 9:21 AM, Ngoc Mai Monica Huynh < > ngocmaimonica.huynh at unipv.it> wrote: > > Yes, I have the #include at the top of the code. > > Thank you very much for your help. > I?ll let you know if I have any improvements from my side. > Looking forward to hearing from you. > > Thanks, > Monica > > On 30 Jun 2023, at 15:08, Matthew Knepley wrote: > > On Fri, Jun 30, 2023 at 8:38?AM Ngoc Mai Monica Huynh < > ngocmaimonica.huynh at unipv.it> wrote: > >> Yes, it compiles and run correctly >> > > Okay, then we try to alter that example until it looks like your test. > > One thing is the #include at the top. Do you have that in your code? If > Fortran does not find the interface, > then it will just SEGV. > > Thanks, > > Matt > > >> Monica >> >> On 30 Jun 2023, at 12:50, Matthew Knepley wrote: >> >> On Fri, Jun 30, 2023 at 6:47?AM Ngoc Mai Monica Huynh < >> ngocmaimonica.huynh at unipv.it> wrote: >> >>> Hi, >>> >>> I have no problem now in compiling, thank you for providing the Fortran >>> interface. >>> I have a follow up question. >>> When running the code, I get this error, which I?m pretty sure it is >>> related to DMDAGetElements(), since up to that line everything works fine. 
>>> >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [0]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and >>> https://petsc.org/release/faq/ >>> [0]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [0]PETSC ERROR: No error traceback is available, the problem could be in >>> the main program. >>> >>> -------------------------------------------------------------------------- >>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >>> with errorcode 59. >>> >>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >>> You may or may not see output from other processes, depending on >>> exactly when Open MPI kills them. >>> ????????????????????????????????????? >>> >>> The lines of code I?m working on are the following: >>> >>> integer ierr >>> >>> MPI_Comm comm >>> DM da3d >>> ISLocalToGlobalMapping map >>> PetscInt nel,nen >>> PetscInt, pointer :: e_loc(:) >>> >>> call DMDACreate3d(comm,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >>> & DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,433,41,29, >>> & 8,2,1,3,1,PETSC_NULL_INTEGER, >>> & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, >>> & da3d,ierr) >>> call DMSetMatType(da3d,MATIS,ierr) >>> call DMSetFromOptions(da3d,ierr) >>> call DMDASetElementType(da3d,DMDA_ELEMENT_Q1,ierr) >>> call DMSetUp(da3d,ierr) >>> call DMGetLocalToGlobalMapping(da3d,map,ierr) >>> >>> call DMDAGetElements(da3d,nel,nen,e_loc,ierr) >>> >>> By printing in a dummy way any kind of message before and after >>> DMDAGetElements(), I cannot pass over it. >>> Unfortunately, I cannot run with the debug option on this machine. >>> Am I calling the routine in a wrong way? >>> >> >> Does >> >> src/dm/tutorials/ex11f90.F90 >> >> run for you? >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Monica >>> >>> >>> On 29 Jun 2023, at 21:09, Matthew Knepley wrote: >>> >>> On Thu, Jun 29, 2023 at 3:05?PM Ngoc Mai Monica Huynh < >>> ngocmaimonica.huynh at unipv.it> wrote: >>> >>>> Thank you. >>>> Does this mean that DMDARestoreElements() is supported as well now? >>>> >>> >>> Yes. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Monica >>>> >>>> >>>> Il giorno 29 giu 2023, alle ore 20:17, Barry Smith >>>> ha scritto: >>>> >>>> ? >>>> >>>> The code is ready in the branch >>>> *barry/2023-06-29/add-dmdagetelements-fortran * >>>> https://gitlab.com/petsc/petsc/-/merge_requests/6647 >>>> >>>> Barry >>>> >>>> >>>> On Jun 29, 2023, at 12:41 PM, Ngoc Mai Monica Huynh < >>>> ngocmaimonica.huynh at unipv.it> wrote: >>>> >>>> That would be amazing, thank you very much! >>>> Monica >>>> >>>> On 29 Jun 2023, at 18:38, Barry Smith wrote: >>>> >>>> >>>> I can provide the Fortran interface this afternoon. >>>> >>>> Barry >>>> >>>> >>>> On Jun 29, 2023, at 10:48 AM, Ngoc Mai Monica Huynh < >>>> ngocmaimonica.huynh at unipv.it> wrote: >>>> >>>> Hi everyone, >>>> >>>> I would need to use the routine DMDAGetElements() in our Fortran code. >>>> However, as I read from the manual, there is no Fortran support for >>>> this routine. >>>> Is there any similar alternative there? >>>> >>>> Many thanks! 
>>>> Best regards, >>>> Monica Huynh >>>> >>>> >>>> >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cho at slac.stanford.edu Fri Jun 30 09:57:36 2023 From: cho at slac.stanford.edu (Ng, Cho-Kuen) Date: Fri, 30 Jun 2023 14:57:36 +0000 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: References: Message-ID: Barry, Mark and Matt, Thank you all for the suggestions. I will modify the code so we can pass runtime options. Cho ________________________________ From: Barry Smith Sent: Friday, June 30, 2023 7:01 AM To: Mark Adams Cc: Matthew Knepley ; Ng, Cho-Kuen ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Using PETSc GPU backend Note that options like -mat_type aijcusparse -vec_type cuda only work if the program is set up to allow runtime swapping of matrix and vector types. If you have a call to MatCreateMPIAIJ() or other specific types then then these options do nothing but because Mark had you use -options_left the program will tell you at the end that it did not use the option so you will know. On Jun 30, 2023, at 9:30 AM, Mark Adams wrote: PetscCall(PetscInitialize(&argc, &argv, NULL, help)); gives us the args and you run: a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left Mark On Fri, Jun 30, 2023 at 6:16?AM Matthew Knepley > wrote: On Fri, Jun 30, 2023 at 1:13?AM Ng, Cho-Kuen via petsc-users > wrote: Mark, The application code reads in parameters from an input file, where we can put the PETSc runtime options. Then we pass the options to PetscInitialize(...). Does that sounds right? PETSc will read command line argument automatically in PetscInitialize() unless you shut it off. Thanks, Matt Cho ________________________________ From: Ng, Cho-Kuen > Sent: Thursday, June 29, 2023 8:32 PM To: Mark Adams > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Using PETSc GPU backend Mark, Thanks for the information. How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments? Do I need to change the C++ main to read in the options? Cho ________________________________ From: Mark Adams > Sent: Thursday, June 29, 2023 5:55 PM To: Ng, Cho-Kuen > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Using PETSc GPU backend Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left The last column of the performance data (from -log_view) will be the percent flops on the GPU. Check that that is > 0. 
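A minimal sketch tied to Barry's note earlier in this thread: for -mat_type aijcusparse -vec_type cuda to take effect, the Mat and Vec must be created generically and configured with *SetFromOptions(), and PetscInitialize() can read the runtime options from a file when the executable cannot be given extra command-line arguments. This is generic PETSc usage, not the application code discussed here; the file name petsc.rc and the sizes are assumptions:

    #include <petsc.h>

    int main(int argc, char **argv)
    {
      Mat A;
      Vec x;

      /* Third argument: an options file read in addition to the command line */
      PetscCall(PetscInitialize(&argc, &argv, "petsc.rc", NULL));

      /* Generic creation: the runtime options -mat_type/-vec_type choose the types */
      PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
      PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100));
      PetscCall(MatSetFromOptions(A));
      PetscCall(MatSetUp(A));

      PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
      PetscCall(VecSetSizes(x, PETSC_DECIDE, 100));
      PetscCall(VecSetFromOptions(x));

      /* A hard-wired MatCreateMPIAIJ()/VecCreateMPI() call would ignore those options */

      PetscCall(MatDestroy(&A));
      PetscCall(VecDestroy(&x));
      PetscCall(PetscFinalize());
      return 0;
    }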
The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left. Mark On Thu, Jun 29, 2023 at 7:50?PM Ng, Cho-Kuen via petsc-users > wrote: I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend? Thanks, Cho -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cho at slac.stanford.edu Fri Jun 30 10:00:31 2023 From: cho at slac.stanford.edu (Ng, Cho-Kuen) Date: Fri, 30 Jun 2023 15:00:31 +0000 Subject: [petsc-users] Using PETSc GPU backend In-Reply-To: <30290-649eb780-1f-420f1180@218347007> References: <30290-649eb780-1f-420f1180@218347007> Message-ID: Paul, Thank you for your suggestion. I will try different spack install specifications. Cho ________________________________ From: Grosse-Bley, Paul Leonard Sent: Friday, June 30, 2023 4:07 AM To: Ng, Cho-Kuen Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Using PETSc GPU backend Hi Cho, you might want to specify the GPU architecture to make sure that everything is compiled optimally. I.e. "spack install petsc +cuda cuda_arch=80 +zoltan" Best, Paul On Friday, June 30, 2023 01:50 CEST, petsc-users-request at mcs.anl.gov wrote: Date: Thu, 29 Jun 2023 23:50:10 +0000 From: "Ng, Cho-Kuen" To: "petsc-users at mcs.anl.gov" Subject: [petsc-users] Using PETSc GPU backend Message-ID: Content-Type: text/plain; charset="iso-8859-1" I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend? Thanks, Cho -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jun 30 10:20:43 2023 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 30 Jun 2023 11:20:43 -0400 Subject: [petsc-users] PCMG with PCREDISTRIBUTE In-Reply-To: References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> <7FDF0ACC-CBF1-427D-B1F2-425D8877DB9D@petsc.dev> Message-ID: > On Jun 30, 2023, at 10:22 AM, Matthew Knepley wrote: > > On Fri, Jun 30, 2023 at 10:16?AM Carl-Johan Thore via petsc-users > wrote: >> Thanks for the quick reply and the suggestions! >> >> >> >> ? ? you should first check that the PCMG works quite well ? >> >> >> >> Yes, the PCMG works very well for the full system. 
>> >> >> >> ?I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM >> >> and give it to the inner KSP PCMG?. So you solve problem 2 but not problem 1.? >> >> >> >> Yes, it?s slightly different so problem 2 should be solved. >> >> >> >> It looked somewhat complicated to get PCMG to work with redistribute, so I?ll try with PCGAMG first >> >> (it ran immediately with redistribute, but was slower than PCMG on my, very small, test problem. I?ll try >> >> to tune the settings). >> >> >> >> A related question: I?m here using a DMDA for a structured grid but I?m locking so many DOFs that for many of the elements >> >> all DOFs are locked. In such a case could it make sense to switch/convert the DMDA to a DMPlex containing only those >> >> elements that actually have DOFs? >> > > Possibly, but if you are doing FD, then there is built-in topology in DMDA that is not present in Plex, so > finding the neighbors in the right order is harder (possible, but harder, we address this in some new work that is not yet merged). There is also structured adaptive support with DMForest, but this also does not preserve the stencil. The efficiency of active set VI solvers in PETSc demonstrates to me that solving reduced systems can be done efficiently with geometric multigrid using a structured grid so I would not suggest giving up on what you started. You can do it in two steps 1) Use PCREDISTRIBUTE but hack the code in redistribute.c to not move dof between MPI ranks, just have it remove the locked rows/columns (to start just run on one MPI rank since then nothing is moved) Then in your code you just need to pull out the appropriate rows and columns of the interpolation that correspond to the dof you have kept and pass this smaller interpolation to the inner KSP PCMG. This is straightforward and like what is in DMSetVI. The MG convergence should be just as good as on the full system. 2) the only problem with 1 is it is likely to be poorly load balanced (but you can make some runs to see how imbalanced it is, that will depend exactly on what parts are locked and what MPI processes they are on). So if it is poorly balanced then you would need to get out of redistribute.c a mapping for each kept dof to what MPI rank it is moved to and use that to move the entries in the reduced interpolation you have created. If you do succeed it would actually be useful code that we could add to PCREDISTRIBUTE for more general use by others. Barry > > Thanks, > > Matt > >> From: Barry Smith > >> Sent: Friday, June 30, 2023 3:57 PM >> To: Carl-Johan Thore > >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE >> >> >> >> >> >> Oh, I forgot to mention you should first check that the PCMG works quite well for the full system (without the PCREDISTRIBUTE); the convergence >> >> on the redistributed system (assuming you did all the work to get PCMG to work for you) should be very similar to (but not measurably better) than the convergence on the full system. 
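A rough sketch of step 1 in Barry's two-step plan above: restrict the full DMDA interpolation to the kept (non-locked) rows and columns before handing it to the inner PCMG. The index sets isKeptFine/isKeptCoarse and the helper itself are hypothetical, assumed to be built by the application from its own locked-DOF information; this is not existing PCREDISTRIBUTE functionality:

    #include <petscksp.h>

    /* Restrict the full interpolation P to the unlocked dofs and hand it to the
       inner PCMG for one level (error handling and the loop over levels omitted). */
    static PetscErrorCode SetReducedInterpolation(PC innerpc, PetscInt level, Mat P,
                                                  IS isKeptFine, IS isKeptCoarse)
    {
      Mat Pred;

      PetscFunctionBeginUser;
      /* Keep only the rows (unlocked fine-level dofs) and columns (unlocked
         coarse-level dofs) of the full interpolation built from the DMDA.    */
      PetscCall(MatCreateSubMatrix(P, isKeptFine, isKeptCoarse, MAT_INITIAL_MATRIX, &Pred));
      PetscCall(PCMGSetInterpolation(innerpc, level, Pred));
      PetscCall(MatDestroy(&Pred));
      PetscFunctionReturn(0);
    }

With load balancing (step 2) the rows of Pred would additionally have to be moved to the ranks chosen by PCREDISTRIBUTE before the call to PCMGSetInterpolation().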
>> >> >> >> >> >> >> >> >> On Jun 30, 2023, at 9:17 AM, Barry Smith > wrote: >> >> >> >> >> >> ex42.c provides directly the interpolation/restriction needed to move between levels in the loop >> >> >> >> for (k = 1; k < nlevels; k++) { >> >> PetscCall(DMCreateInterpolation(da_list[k - 1], da_list[k], &R, NULL)); >> >> PetscCall(PCMGSetInterpolation(pc, k, R)); >> >> PetscCall(MatDestroy(&R)); >> >> } >> >> >> >> The more standard alternative to this is to call KSPSetDM() and have the PCMG setup use the DM >> >> to construct the interpolations (I don't know why ex42.c does this construction itself instead of having the KSPSetDM() process handle it but that doesn't matter). The end result is the same in both cases. >> >> >> >> Since PCREDISTRIBUTE builds its own new matrix (by using only certain rows and columns of the original matrix) the original interpolation >> >> cannot be used for two reasons >> >> >> >> 1) (since it is for the full system) It is for the wrong problem. >> >> >> >> 2) In addition, if you ran with ex42.c the inner KSP does not have access to the interpolation that was constructed so you could not get PCMG to to work as indicated below. >> >> >> >> I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM >> >> and give it to the inner KSP PCMG?. So you solve problem 2 but not problem 1. >> >> >> >> So the short answer is that there is no "canned" way to use the PCMG process trivially with PCDISTRIBUTE. >> >> >> >> To do what you want requires two additional steps >> >> >> >> 1) after you construct the full interpolation matrix (by using the DM) you need to remove the rows associated with the dof that have been removed by the "locked" variables (and the columns that are associated with coarse grid points that live on the removed points) so that the interpolation is the correct "size" for the smaller problem >> >> >> >> 2) since PCREDISTRIBUTE actually moves dof of freedom between MPI processes for load balancing after it has removed the locked variables you would need to do the exact same movement for the rows of the interpolation matrix that you have constructed (after you have removed the "locked" rows of the interpolation. >> >> >> >> Lots of bookkeeping to acheive 1 and 2 but conceptually simple. >> >> >> >> As an experiment you can try using PCGAMG on the redistributed matrix -redistribute_pc_type gamg to use algebraic multigrid just to see the time and convergence rates. Since GAMG creates its own interpolation based on the matrix and it will be built on the smaller redistributed matrix there will no issue with the wrong "sized" interpolation. Of course you have the overhead of algebraic multigrid and cannot take advantage of geometric multigrid. The GAMG approach may be satisfactory to your needs. >> >> >> >> If you are game for looking more closely at using redistribute with geometric multigrid and PETSc (which will require digging into PETSc source code and using internal information in the PETSc source code) you can start by looking at how we solve variational problems with SNES using reduced space active set methods. SNESVINEWTONRSLS /src/snes/impls/vi/rs/virs.c This code solves problem 1 see() it builds the entire interpolation and then pulls out the required non-locked part. Reduced space active set methods essentially lock the constrained dof and solve a smaller system without those dof at each iteration. >> >> >> >> But it does not solve problem 2. 
Moving the rows of the "smaller" interpolation to the correct MPI process based on where PCREDISTRIBUTE moved rows. To do this would requring looking at the PCREDISTRUBUTE code and extracting the information of where each row was moving and performing the process for the interpolation matrix. >> >> src/ksp/pc/impls/redistribute/redistribute.c >> >> >> >> Barry >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users > wrote: >> >> >> >> Hi, >> >> >> >> I'm trying to run an iterative solver (FGMRES for example) with PCMG as preconditioner. The setup of PCMG >> >> is done roughly as in ex42 of the PETSc-tutorials (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). >> >> Since I have many locked degrees-of-freedom I would like to use PCREDISTRIBUTE. However, this >> >> results in (30039 is the number of DOFs after redistribute and 55539 the number before): >> >> >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> >> [0]PETSC ERROR: Nonconforming object sizes >> >> [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for MatProductType PtAP: A 30039x30039, P 55539x7803 >> >> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. >> >> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4 GIT Date: 2023-04-24 16:37:00 +0200 >> >> [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023 >> >> [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 --download-scalapack --download-hdf5 --download-zlib --download-mumps --download-parmetis --download-metis --download-ptscotch --download-hypre --download-spai >> >> [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420 >> >> [0]PETSC ERROR: #2 MatProductSetFromOptions() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541 >> >> [0]PETSC ERROR: #3 MatPtAP() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868 >> >> [0]PETSC ERROR: #4 MatGalerkin() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899 >> >> [0]PETSC ERROR: #5 PCSetUp_MG() at /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029 >> >> [0]PETSC ERROR: #6 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 >> >> [0]PETSC ERROR: #7 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 >> >> [0]PETSC ERROR: #8 PCSetUp_Redistribute() at /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327 >> >> [0]PETSC ERROR: #9 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994 >> >> [0]PETSC ERROR: #10 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406 >> >> [0]PETSC ERROR: #11 KSPSolve_Private() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824 >> >> [0]PETSC ERROR: #12 KSPSolve() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070 >> >> >> >> It?s clear what happens I think, and it kind of make since not all levels are redistributed as they should (?). >> >> Is it possible to use PCMG with PCREDISTRIBUTE in an easy way? >> >> >> >> Kind regards, >> >> Carl-Johan >> >> >> >> >> > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From jsfaraway at gmail.com Fri Jun 30 10:21:00 2023
From: jsfaraway at gmail.com (Runfeng Jin)
Date: Fri, 30 Jun 2023 23:21:00 +0800
Subject: [petsc-users] Smaller assemble time with increasing processors
Message-ID: 

Hello!

When I use PETSc to build an SBAIJ matrix, I find a strange thing: when I increase the number of processors, the assembly time becomes smaller. All these runs assemble exactly the same matrix. The assembly time mainly arises from message passing, because I use a dynamic workload, so it is random which elements are computed by which processor.
Intuitively, if I use more processors, it should be more likely that a processor computes elements that are stored on other processors. But from the output of -log_view, it seems that when I use more processors, each processor computes more of the elements stored locally (inferred from the fact that, with more processors, there is a smaller total amount of passed messages).

What could cause this to happen? Thank you!

Following is the output of -log_view for 64/128/256 processors. Every row is the VecAssemblyEnd line from the time profile.

------------------------------------------------------------------------------------------------------------------------
processors   Count   Time (sec)           Flop                 --- Global ---            --- Stage ----    Total
             Max Ratio  Max        Ratio  Max  Ratio  Mess     AvgLen   Reduct   %T %F %M %L %R  %T %F %M %L %R  Mflop/s
64           1   1.0   2.3775e+02  1.0   0.00e+00 0.0  6.2e+03  2.3e+04  9.0e+00  52  0  1  1  2  52  0  1  1  2    0
128          1   1.0   6.9945e+01  1.0   0.00e+00 0.0  2.5e+04  1.1e+04  9.0e+00  30  0  1  1  2  30  0  1  1  2    0
256          1   1.0   1.7445e+01  1.0   0.00e+00 0.0  9.9e+04  5.2e+03  9.0e+00  10  0  1  1  2  10  0  1  1  2    0

Runfeng Jin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From bsmith at petsc.dev Fri Jun 30 10:35:03 2023
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 30 Jun 2023 11:35:03 -0400
Subject: [petsc-users] Smaller assemble time with increasing processors
In-Reply-To: 
References: 
Message-ID: <4E68029B-EC0E-4C92-B474-4B997DDE972C@petsc.dev>

   You cannot look just at the VecAssemblyEnd() time; that will very likely give the wrong impression of the total time it takes to put the values in. You need to register a new event and put a PetscLogEventBegin() just before you start generating the vector entries and calling VecSetValues(), and put the PetscLogEventEnd() just after the VecAssemblyEnd(). This is the only way to get an accurate accounting of the time.

  Barry

(A minimal sketch of this event logging appears a little further down in this digest.)

> On Jun 30, 2023, at 11:21 AM, Runfeng Jin  wrote:
>
> Hello!
>
> When I use PETSc to build an SBAIJ matrix, I find a strange thing: when I increase the number of processors, the assembly time becomes smaller. All these runs assemble exactly the same matrix. The assembly time mainly arises from message passing, because I use a dynamic workload, so it is random which elements are computed by which processor.
> Intuitively, if I use more processors, it should be more likely that a processor computes elements that are stored on other processors. But from the output of -log_view, it seems that when I use more processors, each processor computes more of the elements stored locally (inferred from the fact that, with more processors, there is a smaller total amount of passed messages).
>
> What could cause this to happen? Thank you!
>
>
> Following is the output of -log_view for 64/128/256 processors. Every row is the VecAssemblyEnd line from the time profile.
From carl-johan.thore at liu.se Fri Jun 30 11:08:44 2023
From: carl-johan.thore at liu.se (Carl-Johan Thore)
Date: Fri, 30 Jun 2023 16:08:44 +0000
Subject: [petsc-users] PCMG with PCREDISTRIBUTE
In-Reply-To: 
References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> <7FDF0ACC-CBF1-427D-B1F2-425D8877DB9D@petsc.dev>
Message-ID: 

"Possibly, but if you are doing FD, then there is built-in topology in DMDA that is not present in Plex, so finding the neighbors in the right order is harder (possible, but harder, we address this in some new work that is not yet merged). There is also structured adaptive support with DMForest, but this also does not preserve the stencil."

I'm using an FEM which doesn't utilize such neighbor info, so perhaps Plex or DMForest could be easier to use, then.

"The efficiency of active set VI solvers in PETSc demonstrates to me that solving reduced systems can be done efficiently with geometric multigrid using a structured grid so I would not suggest giving up on what you started.

You can do it in two steps

1) Use PCREDISTRIBUTE but hack the code in redistribute.c to not move dof between MPI ranks, just have it remove the locked rows/columns (to start just run on one MPI rank since then nothing is moved). Then in your code you just need to pull out the appropriate rows and columns of the interpolation that correspond to the dof you have kept and pass this smaller interpolation to the inner KSP PCMG. This is straightforward and like what is in DMSetVI. The MG convergence should be just as good as on the full system.

2) the only problem with 1 is it is likely to be poorly load balanced (but you can make some runs to see how imbalanced it is, that will depend exactly on what parts are locked and what MPI processes they are on). So if it is poorly balanced then you would need to get out of redistribute.c a mapping for each kept dof to what MPI rank it is moved to and use that to move the entries in the reduced interpolation you have created.

If you do succeed it would actually be useful code that we could add to PCREDISTRIBUTE for more general use by others.

Barry"

Thanks, that seems doable, if not super easy. I might try that.

Kind regards
/Carl-Johan

From: Barry Smith
Sent: Friday, June 30, 2023 5:21 PM
To: Matthew Knepley
Cc: Carl-Johan Thore; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE

On Jun 30, 2023, at 10:22 AM, Matthew Knepley wrote:

On Fri, Jun 30, 2023 at 10:16 AM Carl-Johan Thore via petsc-users <petsc-users at mcs.anl.gov> wrote:

Thanks for the quick reply and the suggestions!

"... you should first check that the PCMG works quite well ..."

Yes, the PCMG works very well for the full system.

"I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM and give it to the inner KSP PCMG?
So you solve problem 2 but not problem 1."

Yes, it's slightly different so problem 2 should be solved.

It looked somewhat complicated to get PCMG to work with redistribute, so I'll try with PCGAMG first (it ran immediately with redistribute, but was slower than PCMG on my, very small, test problem. I'll try to tune the settings).

A related question: I'm here using a DMDA for a structured grid but I'm locking so many DOFs that for many of the elements all DOFs are locked. In such a case could it make sense to switch/convert the DMDA to a DMPlex containing only those elements that actually have DOFs?

Possibly, but if you are doing FD, then there is built-in topology in DMDA that is not present in Plex, so finding the neighbors in the right order is harder (possible, but harder, we address this in some new work that is not yet merged). There is also structured adaptive support with DMForest, but this also does not preserve the stencil.

The efficiency of active set VI solvers in PETSc demonstrates to me that solving reduced systems can be done efficiently with geometric multigrid using a structured grid so I would not suggest giving up on what you started.

You can do it in two steps

1) Use PCREDISTRIBUTE but hack the code in redistribute.c to not move dof between MPI ranks, just have it remove the locked rows/columns (to start just run on one MPI rank since then nothing is moved). Then in your code you just need to pull out the appropriate rows and columns of the interpolation that correspond to the dof you have kept and pass this smaller interpolation to the inner KSP PCMG. This is straightforward and like what is in DMSetVI. The MG convergence should be just as good as on the full system.
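As an illustrative aside, the row/column extraction in step 1 might look roughly like the following. This is only a sketch, not code from PETSc or from this thread: the index sets and variable names are made up, and the sets marking the kept fine- and coarse-level dofs would have to be built from the locked-dof information in the application; pc is assumed to be the outer PCREDISTRIBUTE preconditioner.

    IS       keptFine, keptCoarse;  /* non-locked fine dofs / corresponding coarse dofs       */
    Mat      P, Pred;               /* full interpolation and its reduced ("smaller") version */
    KSP      innerksp;
    PC       innerpc;
    PetscInt level;                 /* the multigrid level this interpolation belongs to      */

    /* keep only the rows (fine dofs) and columns (coarse dofs) that survive the locking */
    PetscCall(MatCreateSubMatrix(P, keptFine, keptCoarse, MAT_INITIAL_MATRIX, &Pred));

    /* hand the reduced interpolation to the PCMG sitting inside PCREDISTRIBUTE */
    PetscCall(PCRedistributeGetKSP(pc, &innerksp));
    PetscCall(KSPGetPC(innerksp, &innerpc));
    PetscCall(PCMGSetInterpolation(innerpc, level, Pred));
    PetscCall(MatDestroy(&Pred));

This only covers step 1; the parallel row movement described in step 2 is a separate bookkeeping exercise not addressed by this sketch.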
Since PCREDISTRIBUTE builds its own new matrix (by using only certain rows and columns of the original matrix) the original interpolation cannot be used for two reasons

1) (since it is for the full system) It is for the wrong problem.

2) In addition, if you ran with ex42.c the inner KSP does not have access to the interpolation that was constructed so you could not get PCMG to work as indicated below.

I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM and give it to the inner KSP PCMG? So you solve problem 2 but not problem 1.

So the short answer is that there is no "canned" way to use the PCMG process trivially with PCREDISTRIBUTE.

To do what you want requires two additional steps

1) after you construct the full interpolation matrix (by using the DM) you need to remove the rows associated with the dof that have been removed by the "locked" variables (and the columns that are associated with coarse grid points that live on the removed points) so that the interpolation is the correct "size" for the smaller problem

2) since PCREDISTRIBUTE actually moves degrees of freedom between MPI processes for load balancing after it has removed the locked variables, you would need to do the exact same movement for the rows of the interpolation matrix that you have constructed (after you have removed the "locked" rows of the interpolation).

Lots of bookkeeping to achieve 1 and 2 but conceptually simple.

As an experiment you can try using PCGAMG on the redistributed matrix, -redistribute_pc_type gamg, to use algebraic multigrid just to see the time and convergence rates. Since GAMG creates its own interpolation based on the matrix, and it will be built on the smaller redistributed matrix, there will be no issue with the wrong "sized" interpolation. Of course you have the overhead of algebraic multigrid and cannot take advantage of geometric multigrid. The GAMG approach may be satisfactory for your needs.

If you are game for looking more closely at using redistribute with geometric multigrid and PETSc (which will require digging into PETSc source code and using internal information in the PETSc source code) you can start by looking at how we solve variational problems with SNES using reduced space active set methods: SNESVINEWTONRSLS, /src/snes/impls/vi/rs/virs.c. This code solves problem 1: it builds the entire interpolation and then pulls out the required non-locked part. Reduced space active set methods essentially lock the constrained dof and solve a smaller system without those dof at each iteration.

But it does not solve problem 2, moving the rows of the "smaller" interpolation to the correct MPI process based on where PCREDISTRIBUTE moved rows. To do this would require looking at the PCREDISTRIBUTE code and extracting the information of where each row was moved and performing the same process for the interpolation matrix.

src/ksp/pc/impls/redistribute/redistribute.c

Barry

On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users wrote:

Hi,

I'm trying to run an iterative solver (FGMRES for example) with PCMG as preconditioner. The setup of PCMG is done roughly as in ex42 of the PETSc tutorials (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). Since I have many locked degrees-of-freedom I would like to use PCREDISTRIBUTE.
However, this results in (30039 is the number of DOFs after redistribute and 55539 the number before):

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Nonconforming object sizes
[0]PETSC ERROR: Matrix dimensions of A and P are incompatible for MatProductType PtAP: A 30039x30039, P 55539x7803
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4  GIT Date: 2023-04-24 16:37:00 +0200
[0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023
[0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 --download-scalapack --download-hdf5 --download-zlib --download-mumps --download-parmetis --download-metis --download-ptscotch --download-hypre --download-spai
[0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420
[0]PETSC ERROR: #2 MatProductSetFromOptions() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541
[0]PETSC ERROR: #3 MatPtAP() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868
[0]PETSC ERROR: #4 MatGalerkin() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899
[0]PETSC ERROR: #5 PCSetUp_MG() at /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029
[0]PETSC ERROR: #6 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994
[0]PETSC ERROR: #7 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406
[0]PETSC ERROR: #8 PCSetUp_Redistribute() at /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327
[0]PETSC ERROR: #9 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994
[0]PETSC ERROR: #10 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406
[0]PETSC ERROR: #11 KSPSolve_Private() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824
[0]PETSC ERROR: #12 KSPSolve() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070

It's clear what happens, I think, and it kind of makes sense since not all levels are redistributed as they should be (?).
Is it possible to use PCMG with PCREDISTRIBUTE in an easy way?

Kind regards,
Carl-Johan

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From knepley at gmail.com Fri Jun 30 12:28:21 2023
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 30 Jun 2023 13:28:21 -0400
Subject: [petsc-users] PCMG with PCREDISTRIBUTE
In-Reply-To: 
References: <82DBCDB1-99C2-40A4-9741-D348AC5D5B3A@petsc.dev> <044DFE3A-95D6-48AA-B6DA-FBB228975597@unipv.it> <4347EFD4-D04E-4DEB-8313-B313AF5F4E02@unipv.it> <66740C62-780A-41C5-9228-1B91FE8D4115@petsc.dev> <7FDF0ACC-CBF1-427D-B1F2-425D8877DB9D@petsc.dev>
Message-ID: 

On Fri, Jun 30, 2023 at 12:08 PM Carl-Johan Thore wrote:

> "Possibly, but if you are doing FD, then there is built-in topology in DMDA that is not present in Plex, so finding the neighbors in the right order is harder (possible, but harder, we address this in some new work that is not yet merged). There is also structured adaptive support with DMForest, but this also does not preserve the stencil."
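For reference, the DMDA-to-DMPlex conversion discussed here can be sketched as follows. This is purely illustrative: it assumes an existing DMDA named da and that the DMConvert path to Plex is available for that grid, and it does not do the "keep only the elements that actually have DOFs" part, which would be a separate filtering step on the resulting Plex.

    DM plex;

    /* convert the structured DMDA 'da' into an unstructured DMPlex mesh */
    PetscCall(DMConvert(da, DMPLEX, &plex));
    /* inspect the converted mesh, e.g. with -converted_dm_view */
    PetscCall(DMViewFromOptions(plex, NULL, "-converted_dm_view"));
    /* ... use plex ... */
    PetscCall(DMDestroy(&plex));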
> I'm using an FEM which doesn't utilize such neighbor info, so perhaps Plex or DMForest could be easier to use, then.

Oh yes. Then I would recommend at least looking at Plex and Forest. FEM is much better supported than in DMDA. By the end of the year, I should have everything in place to seamlessly support libCEED for assembly as well.

  Matt

> "The efficiency of active set VI solvers in PETSc demonstrates to me that solving reduced systems can be done efficiently with geometric multigrid using a structured grid so I would not suggest giving up on what you started.
>
> You can do it in two steps
>
> 1) Use PCREDISTRIBUTE but hack the code in redistribute.c to not move dof between MPI ranks, just have it remove the locked rows/columns (to start just run on one MPI rank since then nothing is moved). Then in your code you just need to pull out the appropriate rows and columns of the interpolation that correspond to the dof you have kept and pass this smaller interpolation to the inner KSP PCMG. This is straightforward and like what is in DMSetVI. The MG convergence should be just as good as on the full system.
>
> 2) the only problem with 1 is it is likely to be poorly load balanced (but you can make some runs to see how imbalanced it is, that will depend exactly on what parts are locked and what MPI processes they are on). So if it is poorly balanced then you would need to get out of redistribute.c a mapping for each kept dof to what MPI rank it is moved to and use that to move the entries in the reduced interpolation you have created.
>
> If you do succeed it would actually be useful code that we could add to PCREDISTRIBUTE for more general use by others.
>
> Barry"
>
> Thanks, that seems doable, if not super easy. I might try that.
>
> Kind regards
> /Carl-Johan
>
> From: Barry Smith
> Sent: Friday, June 30, 2023 5:21 PM
> To: Matthew Knepley
> Cc: Carl-Johan Thore; petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>
> On Jun 30, 2023, at 10:22 AM, Matthew Knepley wrote:
>
> On Fri, Jun 30, 2023 at 10:16 AM Carl-Johan Thore via petsc-users <petsc-users at mcs.anl.gov> wrote:
>
> Thanks for the quick reply and the suggestions!
>
> "... you should first check that the PCMG works quite well ..."
>
> Yes, the PCMG works very well for the full system.
>
> "I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM and give it to the inner KSP PCMG? So you solve problem 2 but not problem 1."
>
> Yes, it's slightly different so problem 2 should be solved.
>
> It looked somewhat complicated to get PCMG to work with redistribute, so I'll try with PCGAMG first (it ran immediately with redistribute, but was slower than PCMG on my, very small, test problem. I'll try to tune the settings).
>
> A related question: I'm here using a DMDA for a structured grid but I'm locking so many DOFs that for many of the elements all DOFs are locked. In such a case could it make sense to switch/convert the DMDA to a DMPlex containing only those elements that actually have DOFs?
>
> Possibly, but if you are doing FD, then there is built-in topology in DMDA that is not present in Plex, so finding the neighbors in the right order is harder (possible, but harder, we address this in some new work that is not yet merged).
> There is also structured adaptive support with DMForest, but this also does not preserve the stencil.
>
> The efficiency of active set VI solvers in PETSc demonstrates to me that solving reduced systems can be done efficiently with geometric multigrid using a structured grid so I would not suggest giving up on what you started.
>
> You can do it in two steps
>
> 1) Use PCREDISTRIBUTE but hack the code in redistribute.c to not move dof between MPI ranks, just have it remove the locked rows/columns (to start just run on one MPI rank since then nothing is moved). Then in your code you just need to pull out the appropriate rows and columns of the interpolation that correspond to the dof you have kept and pass this smaller interpolation to the inner KSP PCMG. This is straightforward and like what is in DMSetVI. The MG convergence should be just as good as on the full system.
>
> 2) the only problem with 1 is it is likely to be poorly load balanced (but you can make some runs to see how imbalanced it is, that will depend exactly on what parts are locked and what MPI processes they are on). So if it is poorly balanced then you would need to get out of redistribute.c a mapping for each kept dof to what MPI rank it is moved to and use that to move the entries in the reduced interpolation you have created.
>
> If you do succeed it would actually be useful code that we could add to PCREDISTRIBUTE for more general use by others.
>
> Barry
>
> Thanks,
>
> Matt
>
> From: Barry Smith
> Sent: Friday, June 30, 2023 3:57 PM
> To: Carl-Johan Thore
> Cc: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>
> Oh, I forgot to mention you should first check that the PCMG works quite well for the full system (without the PCREDISTRIBUTE); the convergence on the redistributed system (assuming you did all the work to get PCMG to work for you) should be very similar to (but not measurably better) than the convergence on the full system.
>
> On Jun 30, 2023, at 9:17 AM, Barry Smith wrote:
>
> ex42.c provides directly the interpolation/restriction needed to move between levels in the loop
>
>   for (k = 1; k < nlevels; k++) {
>     PetscCall(DMCreateInterpolation(da_list[k - 1], da_list[k], &R, NULL));
>     PetscCall(PCMGSetInterpolation(pc, k, R));
>     PetscCall(MatDestroy(&R));
>   }
>
> The more standard alternative to this is to call KSPSetDM() and have the PCMG setup use the DM to construct the interpolations (I don't know why ex42.c does this construction itself instead of having the KSPSetDM() process handle it but that doesn't matter). The end result is the same in both cases.
>
> Since PCREDISTRIBUTE builds its own new matrix (by using only certain rows and columns of the original matrix) the original interpolation cannot be used for two reasons
>
> 1) (since it is for the full system) It is for the wrong problem.
>
> 2) In addition, if you ran with ex42.c the inner KSP does not have access to the interpolation that was constructed so you could not get PCMG to work as indicated below.
>
> I am guessing that your code is slightly different than ex42.c because you take the interpolation matrix provided by the DM and give it to the inner KSP PCMG? So you solve problem 2 but not problem 1.
>
> So the short answer is that there is no "canned" way to use the PCMG process trivially with PCREDISTRIBUTE.
> To do what you want requires two additional steps
>
> 1) after you construct the full interpolation matrix (by using the DM) you need to remove the rows associated with the dof that have been removed by the "locked" variables (and the columns that are associated with coarse grid points that live on the removed points) so that the interpolation is the correct "size" for the smaller problem
>
> 2) since PCREDISTRIBUTE actually moves degrees of freedom between MPI processes for load balancing after it has removed the locked variables, you would need to do the exact same movement for the rows of the interpolation matrix that you have constructed (after you have removed the "locked" rows of the interpolation).
>
> Lots of bookkeeping to achieve 1 and 2 but conceptually simple.
>
> As an experiment you can try using PCGAMG on the redistributed matrix, -redistribute_pc_type gamg, to use algebraic multigrid just to see the time and convergence rates. Since GAMG creates its own interpolation based on the matrix, and it will be built on the smaller redistributed matrix, there will be no issue with the wrong "sized" interpolation. Of course you have the overhead of algebraic multigrid and cannot take advantage of geometric multigrid. The GAMG approach may be satisfactory for your needs.
>
> If you are game for looking more closely at using redistribute with geometric multigrid and PETSc (which will require digging into PETSc source code and using internal information in the PETSc source code) you can start by looking at how we solve variational problems with SNES using reduced space active set methods: SNESVINEWTONRSLS, /src/snes/impls/vi/rs/virs.c. This code solves problem 1: it builds the entire interpolation and then pulls out the required non-locked part. Reduced space active set methods essentially lock the constrained dof and solve a smaller system without those dof at each iteration.
>
> But it does not solve problem 2, moving the rows of the "smaller" interpolation to the correct MPI process based on where PCREDISTRIBUTE moved rows. To do this would require looking at the PCREDISTRIBUTE code and extracting the information of where each row was moved and performing the same process for the interpolation matrix.
>
> src/ksp/pc/impls/redistribute/redistribute.c
>
> Barry
>
> On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users <petsc-users at mcs.anl.gov> wrote:
>
> Hi,
>
> I'm trying to run an iterative solver (FGMRES for example) with PCMG as preconditioner. The setup of PCMG is done roughly as in ex42 of the PETSc tutorials (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html). Since I have many locked degrees-of-freedom I would like to use PCREDISTRIBUTE. However, this results in (30039 is the number of DOFs after redistribute and 55539 the number before):
>
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Nonconforming object sizes
> [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for MatProductType PtAP: A 30039x30039, P 55539x7803
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4  GIT Date: 2023-04-24 16:37:00 +0200
> [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023
> [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 --download-scalapack --download-hdf5 --download-zlib --download-mumps --download-parmetis --download-metis --download-ptscotch --download-hypre --download-spai
> [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420
> [0]PETSC ERROR: #2 MatProductSetFromOptions() at /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541
> [0]PETSC ERROR: #3 MatPtAP() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868
> [0]PETSC ERROR: #4 MatGalerkin() at /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899
> [0]PETSC ERROR: #5 PCSetUp_MG() at /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029
> [0]PETSC ERROR: #6 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994
> [0]PETSC ERROR: #7 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406
> [0]PETSC ERROR: #8 PCSetUp_Redistribute() at /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327
> [0]PETSC ERROR: #9 PCSetUp() at /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994
> [0]PETSC ERROR: #10 KSPSetUp() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406
> [0]PETSC ERROR: #11 KSPSolve_Private() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824
> [0]PETSC ERROR: #12 KSPSolve() at /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070
>
> It's clear what happens, I think, and it kind of makes sense since not all levels are redistributed as they should be (?).
> Is it possible to use PCMG with PCREDISTRIBUTE in an easy way?
>
> Kind regards,
> Carl-Johan
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From liufield at gmail.com Fri Jun 30 15:39:39 2023
From: liufield at gmail.com (neil liu)
Date: Fri, 30 Jun 2023 16:39:39 -0400
Subject: [petsc-users] Inquiry about reading the P2 tetrahedron mesh from GMSH
Message-ID: 

Dear PETSc developers,

I am reading a P2 mesh from GMSH and used DMFieldGetClosure_Internal to check the coordinates for each tetrahedron; they seem reasonable. But when I tried DMGetCoordinates(dm, &global), it seems the vector global is not consistent with the node numbering. What is global here, then?

Thanks,

Xiaodong

From jsfaraway at gmail.com Fri Jun 30 21:25:48 2023
From: jsfaraway at gmail.com (Runfeng Jin)
Date: Sat, 1 Jul 2023 10:25:48 +0800
Subject: [petsc-users] Fwd: Smaller assemble time with increasing processors
In-Reply-To: 
References: <4E68029B-EC0E-4C92-B474-4B997DDE972C@petsc.dev>
Message-ID: 

Hi,

Thanks for your reply.
I tried using PetscLogEvent(), and the result shows the same conclusion. What I have done is:
----------------
PetscLogEvent Mat_assemble_event, Mat_setvalue_event, Mat_setAsse_event;
PetscClassId  classid;
PetscLogDouble user_event_flops;
PetscClassIdRegister("Test assemble and set value", &classid);
PetscLogEventRegister("Test only assemble", classid, &Mat_assemble_event);
PetscLogEventRegister("Test only set values", classid, &Mat_setvalue_event);
PetscLogEventRegister("Test both assemble and set values", classid, &Mat_setAsse_event);
PetscLogEventBegin(Mat_setAsse_event, 0, 0, 0, 0);
PetscLogEventBegin(Mat_setvalue_event, 0, 0, 0, 0);
/* ... compute elements and call MatSetValues; no assembly calls here ... */
PetscLogEventEnd(Mat_setvalue_event, 0, 0, 0, 0);
PetscLogEventBegin(Mat_assemble_event, 0, 0, 0, 0);
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
PetscLogEventEnd(Mat_assemble_event, 0, 0, 0, 0);
PetscLogEventEnd(Mat_setAsse_event, 0, 0, 0, 0);
----------------
And the output is as follows. By the way, does PETSc record all time between PetscLogEventBegin and PetscLogEventEnd, or only the time spent inside PETSc API calls?
----------------
Event                Count   Time (sec)        Flop               --- Global ---                    --- Stage ----              Total
                     Max     Ratio  Max        Ratio  Max  Ratio  Mess     AvgLen   Reduct   %T %F %M %L %R  %T %F %M %L %R  Mflop/s
64new                1  1.0  2.3775e+02  1.0  0.00e+00 0.0  6.2e+03  2.3e+04  9.0e+00  52  0  1  1  2  52  0  1  1  2   0
128new               1  1.0  6.9945e+01  1.0  0.00e+00 0.0  2.5e+04  1.1e+04  9.0e+00  30  0  1  1  2  30  0  1  1  2   0
256new               1  1.0  1.7445e+01  1.0  0.00e+00 0.0  9.9e+04  5.2e+03  9.0e+00  10  0  1  1  2  10  0  1  1  2   0

64:
only assemble        1  1.0  2.6596e+02  1.0  0.00e+00 0.0  7.0e+03  2.8e+05  1.1e+01  55  0  1  8  3  55  0  1  8  3   0
only setvalues       1  1.0  1.9987e+02  1.0  0.00e+00 0.0  0.0e+00  0.0e+00  0.0e+00  41  0  0  0  0  41  0  0  0  0   0
Test both            1  1.0  4.6580e+02  1.0  0.00e+00 0.0  7.0e+03  2.8e+05  1.5e+01  96  0  1  8  4  96  0  1  8  4   0

128:
only assemble        1  1.0  6.9718e+01  1.0  0.00e+00 0.0  2.6e+04  8.1e+04  1.1e+01  30  0  1  4  3  30  0  1  4  3   0
only setvalues       1  1.0  1.4438e+02  1.1  0.00e+00 0.0  0.0e+00  0.0e+00  0.0e+00  60  0  0  0  0  60  0  0  0  0   0
Test both            1  1.0  2.1417e+02  1.0  0.00e+00 0.0  2.6e+04  8.1e+04  1.5e+01  91  0  1  4  4  91  0  1  4  4   0

256:
only assemble        1  1.0  1.7482e+01  1.0  0.00e+00 0.0  1.0e+05  2.3e+04  1.1e+01  10  0  1  3  3  10  0  1  3  3   0
only setvalues       1  1.0  1.3717e+02  1.1  0.00e+00 0.0  0.0e+00  0.0e+00  0.0e+00  78  0  0  0  0  78  0  0  0  0   0
Test both            1  1.0  1.5475e+02  1.0  0.00e+00 0.0  1.0e+05  2.3e+04  1.5e+01  91  0  1  3  4  91  0  1  3  4   0

Runfeng

Barry Smith wrote on Fri, Jun 30, 2023 at 23:35:

> You cannot look just at the VecAssemblyEnd() time; that will very likely give the wrong impression of the total time it takes to put the values in.
>
> You need to register a new event, put a PetscLogEventBegin() just before you start generating the vector entries and calling VecSetValues(), and put the PetscLogEventEnd() just after the VecAssemblyEnd(); this is the only way to get an accurate accounting of the time.
>
> Barry
>
> > On Jun 30, 2023, at 11:21 AM, Runfeng Jin wrote:
> >
> > Hello!
> >
> > When I use PETSc to build an SBAIJ matrix, I find a strange thing. When I increase the number of processors, the assembly time becomes smaller, even though it is exactly the same matrix in every case. The assembly time mainly comes from message passing: I use a dynamic workload, so it is random which elements are computed by which processor.
> > Intuitively, with more processors it should be more likely that a processor computes elements that are stored on other processors.
> > But from the output of -log_view, it seems that with more processors each processor computes more of the elements stored locally (inferred from the fact that, with more processors, the total amount of passed messages is smaller).
> >
> > What could cause this? Thank you!
> >
> > Following is the output of -log_view for 64/128/256 processors. Every row is the time profile of VecAssemblyEnd.
> >
> > ------------------------------------------------------------------------------------------------------------------------
> > processors  Count  Time (sec)        Flop               --- Global ---                    --- Stage ----              Total
> >             Max    Ratio  Max        Ratio  Max  Ratio  Mess     AvgLen   Reduct   %T %F %M %L %R  %T %F %M %L %R  Mflop/s
> > 64          1      1.0    2.3775e+02 1.0    0.00e+00 0.0 6.2e+03  2.3e+04  9.0e+00  52  0  1  1  2  52  0  1  1  2   0
> > 128         1      1.0    6.9945e+01 1.0    0.00e+00 0.0 2.5e+04  1.1e+04  9.0e+00  30  0  1  1  2  30  0  1  1  2   0
> > 256         1      1.0    1.7445e+01 1.0    0.00e+00 0.0 9.9e+04  5.2e+03  9.0e+00  10  0  1  1  2  10  0  1  1  2   0
> >
> > Runfeng Jin