From pierre.seize at onera.fr Wed Apr 1 04:34:19 2020 From: pierre.seize at onera.fr (Pierre Seize) Date: Wed, 01 Apr 2020 11:34:19 +0200 Subject: [petsc-users] DMPLEX Field name changes after getting a Vec Message-ID: <620b28002e914e0a561cd175ecab4bf8@onera.fr> Hello everyone. I noticed a strange feature, and I don't know if it is a small bug or if it is expected behaviour. I have a DMPLEX read from a file, and I add a PetscFV field to it. For debugging purposes I want to name this PetscFV with PetscObjectSetName. But when later I create a global vector, the name of the field gets erased. Is it to be expected ? Here is a minimal working example: #include int main(int argc, char **argv){ PetscErrorCode ierr; DM dm; PetscFV fvm; Vec x; ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr; ierr = DMCreate(PETSC_COMM_WORLD, &dm); CHKERRQ(ierr); ierr = DMSetType(dm, DMPLEX); CHKERRQ(ierr); ierr = PetscFVCreate(PETSC_COMM_WORLD, &fvm); CHKERRQ(ierr); ierr = DMAddField(dm, NULL, (PetscObject) fvm); CHKERRQ(ierr); ierr = PetscObjectSetName((PetscObject) fvm, "FV Model"); CHKERRQ(ierr); ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); <- Here the field is named "FV Model" ierr = DMCreateGlobalVector(dm, &x); CHKERRQ(ierr); ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); <- Here the field is named "Field_0" ierr = VecDestroy(&x); CHKERRQ(ierr); ierr = PetscFVDestroy(&fvm); CHKERRQ(ierr); ierr = DMDestroy(&dm); CHKERRQ(ierr); ierr = PetscFinalize(); return ierr; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 1 06:15:50 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 1 Apr 2020 07:15:50 -0400 Subject: [petsc-users] DMPLEX Field name changes after getting a Vec In-Reply-To: <620b28002e914e0a561cd175ecab4bf8@onera.fr> References: <620b28002e914e0a561cd175ecab4bf8@onera.fr> Message-ID: On Wed, Apr 1, 2020 at 5:34 AM Pierre Seize wrote: > Hello everyone. > > I noticed a strange feature, and I don't know if it is a small bug or if > it is expected behaviour. > > I have a DMPLEX read from a file, and I add a PetscFV field to it. For > debugging purposes I want to name this PetscFV with PetscObjectSetName. But > when later I create a global vector, the name of the field gets erased. Is > it to be expected ? > > You are right. It is a bug. I have made a merge request for this fix here: https://gitlab.com/petsc/petsc/-/merge_requests/2670 and it seems to work on your example: master *:~/Downloads/tmp/Pierre$ ./names DM Object: Mesh 1 MPI processes type: plex Mesh in 0 dimensions: Labels: celltype: 0 strata with value/size () Field FV Model: adjacency FVM DM Object: Mesh 1 MPI processes type: plex Mesh in 0 dimensions: Labels: celltype: 0 strata with value/size () Field FV Model: adjacency FVM The reason for this behavior was that originally, FEM people named the fields individually, but FV people tended to have one field with the physical fields being components so that they could have a single Riemann solve for all fields. Now you can name field components, so this does not matter anymore. 
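For reference, a minimal sketch of the component-naming route, extending the example from the first message (the component count, the component names, and the use of PetscFVSetNumComponents()/PetscFVSetComponentName() are illustrative here, not taken from the original exchange):

#include <petsc.h>
int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  DM             dm;
  PetscFV        fvm;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = DMCreate(PETSC_COMM_WORLD, &dm); CHKERRQ(ierr);
  ierr = DMSetType(dm, DMPLEX); CHKERRQ(ierr);
  ierr = PetscFVCreate(PETSC_COMM_WORLD, &fvm); CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) fvm, "FV Model"); CHKERRQ(ierr);
  /* one field, several physical components: name the components themselves */
  ierr = PetscFVSetNumComponents(fvm, 3); CHKERRQ(ierr);
  ierr = PetscFVSetComponentName(fvm, 0, "Density"); CHKERRQ(ierr);
  ierr = PetscFVSetComponentName(fvm, 1, "Momentum"); CHKERRQ(ierr);
  ierr = PetscFVSetComponentName(fvm, 2, "Energy"); CHKERRQ(ierr);
  ierr = DMAddField(dm, NULL, (PetscObject) fvm); CHKERRQ(ierr);
  ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
  ierr = PetscFVDestroy(&fvm); CHKERRQ(ierr);
  ierr = DMDestroy(&dm); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}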
Thanks, Matt > Here is a minimal working example: > > #include > int main(int argc, char **argv){ > PetscErrorCode ierr; > > DM dm; > PetscFV fvm; > Vec x; > > ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr; > ierr = DMCreate(PETSC_COMM_WORLD, &dm); CHKERRQ(ierr); > ierr = DMSetType(dm, DMPLEX); CHKERRQ(ierr); > ierr = PetscFVCreate(PETSC_COMM_WORLD, &fvm); CHKERRQ(ierr); > ierr = DMAddField(dm, NULL, (PetscObject) fvm); CHKERRQ(ierr); > ierr = PetscObjectSetName((PetscObject) fvm, "FV Model"); CHKERRQ(ierr); > ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); > <- Here the field is named "FV Model" > ierr = DMCreateGlobalVector(dm, &x); CHKERRQ(ierr); > ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); > <- Here the field is named "Field_0" > > > ierr = VecDestroy(&x); CHKERRQ(ierr); > ierr = PetscFVDestroy(&fvm); CHKERRQ(ierr); > ierr = DMDestroy(&dm); CHKERRQ(ierr); > ierr = PetscFinalize(); > return ierr; > } > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.seize at onera.fr Wed Apr 1 08:46:03 2020 From: pierre.seize at onera.fr (Pierre Seize) Date: Wed, 01 Apr 2020 15:46:03 +0200 Subject: [petsc-users] DMPLEX Field name changes after getting a Vec In-Reply-To: References: <620b28002e914e0a561cd175ecab4bf8@onera.fr> Message-ID: Thank you, the fix works fine indeed. Pierre Seize Le 2020-04-01 13:15, Matthew Knepley a ?crit : > On Wed, Apr 1, 2020 at 5:34 AM Pierre Seize wrote: > >> Hello everyone. >> >> I noticed a strange feature, and I don't know if it is a small bug or if it is expected behaviour. >> >> I have a DMPLEX read from a file, and I add a PetscFV field to it. For debugging purposes I want to name this PetscFV with PetscObjectSetName. But when later I create a global vector, the name of the field gets erased. Is it to be expected ? > > You are right. It is a bug. I have made a merge request for this fix here: > > https://gitlab.com/petsc/petsc/-/merge_requests/2670 > > and it seems to work on your example: > > master *:~/Downloads/tmp/Pierre$ ./names > > DM Object: Mesh 1 MPI processes > > type: plex > > Mesh in 0 dimensions: > > Labels: > > celltype: 0 strata with value/size () > > Field FV Model: > > adjacency FVM > > DM Object: Mesh 1 MPI processes > > type: plex > > Mesh in 0 dimensions: > > Labels: > > celltype: 0 strata with value/size () > > Field FV Model: > > adjacency FVM > > The reason for this behavior was that originally, FEM people named the fields individually, but > FV people tended to have one field with the physical fields being components so that they could > have a single Riemann solve for all fields. Now you can name field components, so this does not > matter anymore. 
> > Thanks, > > Matt > >> Here is a minimal working example: >> >> #include >> int main(int argc, char **argv){ >> PetscErrorCode ierr; >> >> DM dm; >> PetscFV fvm; >> Vec x; >> >> ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr; >> ierr = DMCreate(PETSC_COMM_WORLD, &dm); CHKERRQ(ierr); >> ierr = DMSetType(dm, DMPLEX); CHKERRQ(ierr); >> ierr = PetscFVCreate(PETSC_COMM_WORLD, &fvm); CHKERRQ(ierr); >> ierr = DMAddField(dm, NULL, (PetscObject) fvm); CHKERRQ(ierr); >> ierr = PetscObjectSetName((PetscObject) fvm, "FV Model"); CHKERRQ(ierr); >> ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); <- Here the field is named "FV Model" >> ierr = DMCreateGlobalVector(dm, &x); CHKERRQ(ierr); >> ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); <- Here the field is named "Field_0" >> >> ierr = VecDestroy(&x); CHKERRQ(ierr); >> ierr = PetscFVDestroy(&fvm); CHKERRQ(ierr); >> ierr = DMDestroy(&dm); CHKERRQ(ierr); >> ierr = PetscFinalize(); >> return ierr; >> } > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ [1] Links: ------ [1] http://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Fabian.Jakub at physik.uni-muenchen.de Wed Apr 1 09:59:36 2020 From: Fabian.Jakub at physik.uni-muenchen.de (Fabian Jakub) Date: Wed, 1 Apr 2020 16:59:36 +0200 Subject: [petsc-users] DMPlex assemble submatrices with overlapping dofs Message-ID: <47c6cfdd-5f42-1e14-a002-bd0ecf8cb5a5@physik.uni-muenchen.de> Dear Matt, dear PETSc Devs, I have a dmplex mesh with wedge type cells (2d triangle mesh extruded in the vertical, i.e. prism cells). Let me call the faces with 3 edges "top/bot" faces and the faces with 4 edges "side" faces The mesh can be subdivided into two regions. The upper part where a petsc section has only dofs on the "top/bot" faces and a lower part where i have dofs on the "side" faces as well. At the moment I take care of this layout when I assemble the matrices. I was wondering if I could assemble a submatrix for each part, then create a combined matrix from the two regions. What I tried so far was to clone the dm twice (topdm and botdm) generate a section with the corresponding layouts e.g. now topdm has 2 cells with dofs on 3 faces botdm has 1 cell with dofs on 5 faces DMCompositeCreate() DMCompositeAddDM(topdm and botdm) Now, looking at the matrix generated on the composite DM or calling DMCompositeGetGlobalISs I see that the dofs on the face between the two sub meshes are not shared. My question is: Is this generally the way to go about it or do you suggest a completely different approach i.e. instead of "sharing" the dofs on the interfacing faces I could "couple" the meshes with matrix entries in the composite Mat Does that sound more practical? Or how should I go about telling the composite DM about the shared dofs? Many Thanks! 
Fabian From knepley at gmail.com Wed Apr 1 11:47:08 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 1 Apr 2020 12:47:08 -0400 Subject: [petsc-users] DMPlex assemble submatrices with overlapping dofs In-Reply-To: <47c6cfdd-5f42-1e14-a002-bd0ecf8cb5a5@physik.uni-muenchen.de> References: <47c6cfdd-5f42-1e14-a002-bd0ecf8cb5a5@physik.uni-muenchen.de> Message-ID: On Wed, Apr 1, 2020 at 10:58 AM Fabian Jakub < Fabian.Jakub at physik.uni-muenchen.de> wrote: > Dear Matt, dear PETSc Devs, > > > I have a dmplex mesh with wedge type cells (2d triangle mesh extruded in > the vertical, i.e. prism cells). > > Let me call the faces with 3 edges "top/bot" faces and the faces with 4 > edges "side" faces > > The mesh can be subdivided into two regions. > > The upper part where a petsc section has only dofs on the "top/bot" > faces and a lower part where i have dofs on the "side" faces as well. > > At the moment I take care of this layout when I assemble the matrices. > > I was wondering if I could assemble a submatrix for each part, then > create a combined matrix from the two regions. > Would it be good enough to just get a custom ordering for the dofs, so that your combined matrix could look like you want? If so, you can give the PetscSection object for your DM a permutation that orders the points the way you want, first points on the top, then on the bottom, then in between. Thanks, Matt > What I tried so far was to > > clone the dm twice (topdm and botdm) > > generate a section with the corresponding layouts > > e.g. now > > topdm has 2 cells with dofs on 3 faces > > botdm has 1 cell with dofs on 5 faces > > DMCompositeCreate() > > DMCompositeAddDM(topdm and botdm) > > > Now, looking at the matrix generated on the composite DM > > or calling DMCompositeGetGlobalISs > > I see that the dofs on the face between the two sub meshes are not shared. > > My question is: > > Is this generally the way to go about it or do you suggest a completely > different approach > > i.e. instead of "sharing" the dofs on the interfacing faces I could > "couple" the meshes with matrix entries in the composite Mat > > Does that sound more practical? > > Or how should I go about telling the composite DM about the shared dofs? > > Many Thanks! > > Fabian > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Wed Apr 1 20:30:39 2020 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 1 Apr 2020 18:30:39 -0700 Subject: [petsc-users] Best way to setup TS to use TSBASICSYMPLECTIC Message-ID: Hello, What is the suggested way to used TSBASICSYMPLECTIC. Lines below from ./src/dm/impls/swarm/examples/tests/ex5.c:[430-431] /* DM needs to be set before splits so it propogates to sub TSs */ ierr = TSSetDM(ts, sw);CHKERRQ(ierr); ierr = TSSetType(ts,TSBASICSYMPLECTIC);CHKERRQ(ierr); seem to suggest that DM is necessary to use TSBASICSYMPLECTIC solver. I am trying to test the solve for a system of coupled particles with all to all position dependent interaction: For a set of N particles with i=1 to N ddt(x_i) = v_i ; ddt(v_i) = (J x X)_i // i th component of matrix J times position vector X; where J is the coupling matrix and X is the position vector of all particles. 
I tried testing with code similar to ./src/ts/examples/tutorials/hamiltonian/ex1.c where J will be a diagonal matrix with -omega*omgea as diagonal elements. Thanks REddy -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 2 06:12:30 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 2 Apr 2020 07:12:30 -0400 Subject: [petsc-users] Best way to setup TS to use TSBASICSYMPLECTIC In-Reply-To: References: Message-ID: On Wed, Apr 1, 2020 at 9:31 PM Dharmendar Reddy wrote: > Hello, > What is the suggested way to used TSBASICSYMPLECTIC. > > Lines below from ./src/dm/impls/swarm/examples/tests/ex5.c:[430-431] > > /* DM needs to be set before splits so it propogates to sub TSs */ > ierr = TSSetDM(ts, sw);CHKERRQ(ierr); > ierr = TSSetType(ts,TSBASICSYMPLECTIC);CHKERRQ(ierr); > > seem to suggest that DM is necessary to use TSBASICSYMPLECTIC solver. > It is not. > I am trying to test the solve for a system of coupled particles with all > to all position dependent interaction: > For a set of N particles with i=1 to N > ddt(x_i) = v_i ; > > ddt(v_i) = (J x X)_i // i th component of matrix J times position vector X; > > where J is the coupling matrix and X is the position vector of all > particles. > > I tried testing with code similar to > ./src/ts/examples/tutorials/hamiltonian/ex1.c > > where J will be a diagonal matrix with -omega*omgea as diagonal elements. > What is the problem? Thanks, Matt > Thanks > REddy > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kmbooker at uwaterloo.ca Thu Apr 2 12:31:31 2020 From: kmbooker at uwaterloo.ca (Kyle Michael Booker) Date: Thu, 2 Apr 2020 17:31:31 +0000 Subject: [petsc-users] Error Installing Petsc4Py Message-ID: Hello, I've been trying to install petsc4py for the past several days and have had the following errors installing: "Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-0eWEiE/petsc4py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-oUZMRL-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-0eWEiE/petsc4py/" I currently have petsc installed correctly as well as mpich, it is just getting petsc4py installed that I have problems with. Any help would be greatly appreciated. Sincerely, Kyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Apr 2 12:47:55 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 2 Apr 2020 12:47:55 -0500 (CDT) Subject: [petsc-users] Error Installing Petsc4Py In-Reply-To: References: Message-ID: Which version of petsc and which version of petsc4py is this? Lisandro can help with pip issues. 
Just want to mention an alternate mode that you might want to try: --download-petsc4py option to petsc configure Satish On Thu, 2 Apr 2020, Kyle Michael Booker wrote: > Hello, > > > I've been trying to install petsc4py for the past several days and have had the following errors installing: > > > "Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-0eWEiE/petsc4py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-oUZMRL-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-0eWEiE/petsc4py/" > > > I currently have petsc installed correctly as well as mpich, it is just getting petsc4py installed that I have problems with. Any help would be greatly appreciated. > > > Sincerely, > > > Kyle > From dalcinl at gmail.com Thu Apr 2 15:47:28 2020 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 2 Apr 2020 23:47:28 +0300 Subject: [petsc-users] Error Installing Petsc4Py In-Reply-To: References: Message-ID: Please note that I have not released yet petsc4py-3.13, so pip install will not work with latest PETSc release 3.13 On Thu, 2 Apr 2020 at 20:48, Satish Balay via petsc-users < petsc-users at mcs.anl.gov> wrote: > Which version of petsc and which version of petsc4py is this? > > Lisandro can help with pip issues. > > Just want to mention an alternate mode that you might want to try: > > --download-petsc4py option to petsc configure > > Satish > > On Thu, 2 Apr 2020, Kyle Michael Booker wrote: > > > Hello, > > > > > > I've been trying to install petsc4py for the past several days and have > had the following errors installing: > > > > > > "Command "/usr/bin/python -u -c "import setuptools, > tokenize;__file__='/tmp/pip-build-0eWEiE/petsc4py/setup.py';f=getattr(tokenize, > 'open', open)(__file__);code=f.read().replace('\r\n', > '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record > /tmp/pip-oUZMRL-record/install-record.txt > --single-version-externally-managed --compile --user --prefix=" failed with > error code 1 in /tmp/pip-build-0eWEiE/petsc4py/" > > > > > > I currently have petsc installed correctly as well as mpich, it is just > getting petsc4py installed that I have problems with. Any help would be > greatly appreciated. > > > > > > Sincerely, > > > > > > Kyle > > > > -- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexprescott at email.arizona.edu Thu Apr 2 22:32:29 2020 From: alexprescott at email.arizona.edu (Alexander B Prescott) Date: Thu, 2 Apr 2020 20:32:29 -0700 Subject: [petsc-users] Zero diagonal Error Code with all non-zero diagonal entries Message-ID: Hello, I am teaching myself how to use Petsc for nonlinear equations and I've run into a problem that I can't quite figure out. I am trying to use the matrix coloring routines for the finite difference Jacobian approximation, and I've followed the steps in the manual to do this. When I run the program with a MG preconditioner, I get back the error: [0]PETSC ERROR: --------------------- Error Message ---------------------------- [0]PETSC ERROR: Arguments are incompatible [0]PETSC ERROR: Zero diagonal on row 0 ..... 
What's interesting is that after I've added non-zero entries to the matrix with MatrixSetValues() and assembled the matrix with MatAssemblyBegin() + MatAssemblyEnd(), I can verify that every diagonal entry is non-zero with a call to MatGetValues. I've included a relevant code snippet below and I'm happy to send more. Any guidance is greatly appreciated. Command line: mpirun -n 1 ./program -snes_view -snes_converged_reason -snes_monitor -ksp_monitor -ksp_converged_reason -pc_type mg Code snippet: ierr = FormJacobianColoring(snes,J);CHKERRQ(ierr); // this function set's matrix values and assembles the matrix // removed the code, but this is where I've used MatGetValues() to ensure that the diagonal of J (as well as other entries) has been set to 1.0 ierr = MatColoringCreate(J,&coloring);CHKERRQ(ierr); ierr = MatColoringSetType(coloring,MATCOLORINGSL);CHKERRQ(ierr); ierr = MatColoringSetFromOptions(coloring);CHKERRQ(ierr); ierr = MatColoringApply(coloring,&iscoloring);CHKERRQ(ierr); ierr = MatColoringDestroy(&coloring);CHKERRQ(ierr); /* Create the data structure that SNESComputeJacobianDefaultColor() uses to compute the actual Jacobians via finite differences. */ ierr = MatFDColoringCreate(J,iscoloring,&fdcoloring);CHKERRQ(ierr); ierr = MatFDColoringSetFunction(fdcoloring,(PetscErrorCode (*)(void))FormFunction,NULL);CHKERRQ(ierr); ierr = MatFDColoringSetFromOptions(fdcoloring);CHKERRQ(ierr); ierr = MatFDColoringSetUp(J,iscoloring,fdcoloring);CHKERRQ(ierr); ierr = SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);CHKERRQ(ierr); ierr = SNESSetFromOptions(snes);CHKERRQ(ierr); ierr = VecSet(x,0.001);CHKERRQ(ierr); ierr = SNESSolve(snes,NULL,x);CHKERRQ(ierr); And here's the full error code: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Arguments are incompatible [0]PETSC ERROR: Zero diagonal on row 0 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 [0]PETSC ERROR: ./petsc_flowroute on a named i0n22 by alexprescott Thu Apr 2 20:14:01 2020 [0]PETSC ERROR: Configure options --prefix=/cm/shared/uaapps/petsc/3.10.3 --download-fblaslapack --download-metis --download-parmetis --download-hypre PETSC_ARCH=linux-gnu --with-debugging=no COPTFLAGS=-O3 CXXOPTFLAGS=-O3 [0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1662 in /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c [0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1693 in /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c [0]PETSC ERROR: #3 MatSOR() line 3932 in /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/interface/matrix.c [0]PETSC ERROR: #4 PCApply_SOR() line 31 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/sor/sor.c [0]PETSC ERROR: #5 PCApply() line 462 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #6 KSP_PCApply() line 281 in /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h [0]PETSC ERROR: #7 KSPInitialResidual() line 67 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: #8 KSPSolve_GMRES() line 233 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: #9 KSPSolve() line 780 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 367 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/cheby/cheby.c [0]PETSC ERROR: #11 KSPSolve() line 780 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #12 PCMGMCycle_Private() line 20 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #13 PCApply_MG() line 377 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #14 PCApply() line 462 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #15 KSP_PCApply() line 281 in /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h [0]PETSC ERROR: #16 KSPInitialResidual() line 67 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: #17 KSPSolve_GMRES() line 233 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: #18 KSPSolve() line 780 in /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #19 SNESSolve_NEWTONLS() line 224 in /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/impls/ls/ls.c [0]PETSC ERROR: #20 SNESSolve() line 4397 in /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/interface/snes.c [0]PETSC ERROR: #21 main() line 705 in /home/u16/alexprescott/petsc_example/flowroute/petsc_flowroute.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -ksp_converged_reason [0]PETSC ERROR: -ksp_monitor [0]PETSC ERROR: -pc_type mg [0]PETSC ERROR: -snes_converged_reason [0]PETSC ERROR: -snes_monitor [0]PETSC ERROR: -snes_view [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 Printed matrix for a 25 x 25 example 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 Best, Alexander -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 2 23:06:51 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 3 Apr 2020 00:06:51 -0400 Subject: [petsc-users] Zero diagonal Error Code with all non-zero diagonal entries In-Reply-To: References: Message-ID: On Thu, Apr 2, 2020 at 11:33 PM Alexander B Prescott < alexprescott at email.arizona.edu> wrote: > Hello, > > I am teaching myself how to use Petsc for nonlinear equations and I've run > into a problem that I can't quite figure out. I am trying to use the matrix > coloring routines for the finite difference Jacobian approximation, and > I've followed the steps in the manual to do this. > When I run the program with a MG preconditioner, I get back the error: > > [0]PETSC ERROR: --------------------- Error Message > ---------------------------- > [0]PETSC ERROR: Arguments are incompatible > [0]PETSC ERROR: Zero diagonal on row 0 > ..... > > The easiest thing to try is just comment out all your matrix stuff including SNESSetJacobian(). PETSc will do the coloring automatically. If this works, you know its your coloring. If not, then its something with the MG setup. Thanks, Matt > What's interesting is that after I've added non-zero entries to the matrix > with MatrixSetValues() and assembled the matrix with MatAssemblyBegin() + > MatAssemblyEnd(), I can verify that every diagonal entry is non-zero with a > call to MatGetValues. I've included a relevant code snippet below and I'm > happy to send more. Any guidance is greatly appreciated. > > Command line: > > mpirun -n 1 ./program -snes_view -snes_converged_reason -snes_monitor > -ksp_monitor -ksp_converged_reason -pc_type mg > > Code snippet: > > ierr = FormJacobianColoring(snes,J);CHKERRQ(ierr); // this function set's > matrix values and assembles the matrix > > // removed the code, but this is where I've used MatGetValues() to ensure > that the diagonal of J (as well as other entries) has been set to 1.0 > > ierr = MatColoringCreate(J,&coloring);CHKERRQ(ierr); > ierr = MatColoringSetType(coloring,MATCOLORINGSL);CHKERRQ(ierr); > ierr = MatColoringSetFromOptions(coloring);CHKERRQ(ierr); > ierr = MatColoringApply(coloring,&iscoloring);CHKERRQ(ierr); > ierr = MatColoringDestroy(&coloring);CHKERRQ(ierr); > /* > Create the data structure that SNESComputeJacobianDefaultColor() uses to > compute the actual Jacobians via finite differences. 
> */ > ierr = MatFDColoringCreate(J,iscoloring,&fdcoloring);CHKERRQ(ierr); > ierr = MatFDColoringSetFunction(fdcoloring,(PetscErrorCode > (*)(void))FormFunction,NULL);CHKERRQ(ierr); > ierr = MatFDColoringSetFromOptions(fdcoloring);CHKERRQ(ierr); > ierr = MatFDColoringSetUp(J,iscoloring,fdcoloring);CHKERRQ(ierr); > ierr = > SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);CHKERRQ(ierr); > > ierr = SNESSetFromOptions(snes);CHKERRQ(ierr); > > ierr = VecSet(x,0.001);CHKERRQ(ierr); > ierr = SNESSolve(snes,NULL,x);CHKERRQ(ierr); > > > > And here's the full error code: > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Arguments are incompatible > [0]PETSC ERROR: Zero diagonal on row 0 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 > [0]PETSC ERROR: ./petsc_flowroute on a named i0n22 by alexprescott Thu > Apr 2 20:14:01 2020 > [0]PETSC ERROR: Configure options --prefix=/cm/shared/uaapps/petsc/3.10.3 > --download-fblaslapack --download-metis --download-parmetis > --download-hypre PETSC_ARCH=linux-gnu --with-debugging=no COPTFLAGS=-O3 > CXXOPTFLAGS=-O3 > [0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1662 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c > [0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1693 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c > [0]PETSC ERROR: #3 MatSOR() line 3932 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/interface/matrix.c > [0]PETSC ERROR: #4 PCApply_SOR() line 31 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/sor/sor.c > [0]PETSC ERROR: #5 PCApply() line 462 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #6 KSP_PCApply() line 281 in > /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h > [0]PETSC ERROR: #7 KSPInitialResidual() line 67 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c > [0]PETSC ERROR: #8 KSPSolve_GMRES() line 233 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c > [0]PETSC ERROR: #9 KSPSolve() line 780 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 367 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/cheby/cheby.c > [0]PETSC ERROR: #11 KSPSolve() line 780 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #12 PCMGMCycle_Private() line 20 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #13 PCApply_MG() line 377 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #14 PCApply() line 462 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #15 KSP_PCApply() line 281 in > /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h > [0]PETSC ERROR: #16 KSPInitialResidual() line 67 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c > [0]PETSC ERROR: #17 KSPSolve_GMRES() line 233 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c > [0]PETSC ERROR: #18 KSPSolve() line 780 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #19 SNESSolve_NEWTONLS() line 224 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #20 SNESSolve() line 
4397 in > /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/interface/snes.c > [0]PETSC ERROR: #21 main() line 705 in > /home/u16/alexprescott/petsc_example/flowroute/petsc_flowroute.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -ksp_converged_reason > [0]PETSC ERROR: -ksp_monitor > [0]PETSC ERROR: -pc_type mg > [0]PETSC ERROR: -snes_converged_reason > [0]PETSC ERROR: -snes_monitor > [0]PETSC ERROR: -snes_view > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 > > > Printed matrix for a 25 x 25 example > > 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 > 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 > 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 > 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 > 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 > 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 > 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 > 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 > 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 > 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 > 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 > > > > Best, > Alexander > > > > -- > Alexander Prescott > alexprescott at email.arizona.edu > PhD Candidate, The University of Arizona > Department of Geosciences > 1040 E. 4th Street > Tucson, AZ, 85721 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Fri Apr 3 10:25:14 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Fri, 3 Apr 2020 09:25:14 -0600 Subject: [petsc-users] How to set an initial guess for TS Message-ID: Hi All, TSSetSolution will set an initial condition for the current TSSolve(). What should I do if I want to set an initial guess for the current solution that is different from the initial condition? The initial guess is supposed to be really close to the current solution, and then will accelerate my solver. In other words, TSSetSolution will set "U_{n-1}", and now we call TSSolve to figure out "U_{n}". If I know something about "U_{n}", and I want to set "\bar{U}_{n}" as the initial guess of "U_{n}" when computing "U_{n}". Thanks, Fande, -------------- next part -------------- An HTML attachment was scrubbed... 
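For the initial-guess question above, a hedged sketch of the closest existing hook (not an answer given in this thread): pull the inner SNES out with TSGetSNES() and register an initial-guess callback with SNESSetComputeInitialGuess(). The AppCtx, MyInitialGuess(), and AttachInitialGuess() names are illustrative, and whether a given TS implementation later overwrites this guess with its own predictor would still need to be checked.

#include <petscts.h>

typedef struct { Vec guess; } AppCtx;  /* illustrative user context holding \bar{U}_n */

/* Called by SNESSolve() before the nonlinear iteration: overwrite x with the prediction */
static PetscErrorCode MyInitialGuess(SNES snes, Vec x, void *ctx)
{
  AppCtx        *user = (AppCtx*) ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecCopy(user->guess, x); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Call once, after the TS is set up and before TSSolve() */
static PetscErrorCode AttachInitialGuess(TS ts, AppCtx *user)
{
  SNES           snes;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSGetSNES(ts, &snes); CHKERRQ(ierr);
  ierr = SNESSetComputeInitialGuess(snes, MyInitialGuess, user); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}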
URL: From alexprescott at email.arizona.edu Fri Apr 3 11:20:14 2020 From: alexprescott at email.arizona.edu (Alexander B Prescott) Date: Fri, 3 Apr 2020 09:20:14 -0700 Subject: [petsc-users] [EXT]Re: Zero diagonal Error Code with all non-zero diagonal entries In-Reply-To: References: Message-ID: Hi Matt, Thanks for your help. I've done as you suggested and it still doesn't work -- I get the same error message as before if I add the -snes_fd flag to the command line. I also added -info and included that output below. Also, a similar error is returned with other PC types, e.g LU. [0] PetscInitialize(): PETSc successfully started: number of processors = 1 [0] PetscGetHostName(): Rejecting domainname, likely is NIS login2.(none) [0] PetscInitialize(): Running on machine: login2 [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374784 max tags = 268435455 [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784 [0] DMGetDMSNES(): Creating new DMSNES [0] PetscGetHostName(): Rejecting domainname, likely is NIS login2.(none) [0] SNESSetFromOptions(): Setting default finite difference Jacobian matrix [0] DMCreateMatrix_Shell(): Naively creating matrix using global vector distribution without preallocation [0] MatSetUp(): Warning not preallocating matrix storage [0] DMGetDMKSP(): Creating new DMKSP 0 SNES Function norm 1.000000000000e+01 [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 25 X 25; storage space: 70 unneeded,55 used [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0 [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3 [0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 25) < 0.6. Do not use CompressedRow routines. [0] MatSeqAIJCheckInode(): Found 25 nodes out of 25 rows. Not using Inode routines [0] SNESComputeJacobian(): Rebuilding preconditioner [0] PCSetUp(): Setting up PC for first time [0] PCSetUp_MG(): Using outer operators to define finest grid operator because PCMGGetSmoother(pc,nlevels-1,&ksp);KSPSetOperators(ksp,...); was not called. [0] PCSetUp(): Setting up PC for first time [0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged [0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged [0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Arguments are incompatible [0]PETSC ERROR: Zero diagonal on row 0 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. .... Best, Alexander On Thu, Apr 2, 2020 at 9:07 PM Matthew Knepley wrote: > *External Email* > On Thu, Apr 2, 2020 at 11:33 PM Alexander B Prescott < > alexprescott at email.arizona.edu> wrote: > >> Hello, >> >> I am teaching myself how to use Petsc for nonlinear equations and I've >> run into a problem that I can't quite figure out. I am trying to use the >> matrix coloring routines for the finite difference Jacobian approximation, >> and I've followed the steps in the manual to do this. >> When I run the program with a MG preconditioner, I get back the error: >> >> [0]PETSC ERROR: --------------------- Error Message >> ---------------------------- >> [0]PETSC ERROR: Arguments are incompatible >> [0]PETSC ERROR: Zero diagonal on row 0 >> ..... >> >> > The easiest thing to try is just comment out all your matrix stuff > including SNESSetJacobian(). 
PETSc will do the coloring automatically. > If this works, you know its your coloring. If not, then its something with > the MG setup. > > Thanks, > > Matt > > >> What's interesting is that after I've added non-zero entries to the >> matrix with MatrixSetValues() and assembled the matrix >> with MatAssemblyBegin() + MatAssemblyEnd(), I can verify that every >> diagonal entry is non-zero with a call to MatGetValues. I've included a >> relevant code snippet below and I'm happy to send more. Any guidance is >> greatly appreciated. >> >> Command line: >> >> mpirun -n 1 ./program -snes_view -snes_converged_reason -snes_monitor >> -ksp_monitor -ksp_converged_reason -pc_type mg >> >> Code snippet: >> >> ierr = FormJacobianColoring(snes,J);CHKERRQ(ierr); // this function set's >> matrix values and assembles the matrix >> >> // removed the code, but this is where I've used MatGetValues() to ensure >> that the diagonal of J (as well as other entries) has been set to 1.0 >> >> ierr = MatColoringCreate(J,&coloring);CHKERRQ(ierr); >> ierr = MatColoringSetType(coloring,MATCOLORINGSL);CHKERRQ(ierr); >> ierr = MatColoringSetFromOptions(coloring);CHKERRQ(ierr); >> ierr = MatColoringApply(coloring,&iscoloring);CHKERRQ(ierr); >> ierr = MatColoringDestroy(&coloring);CHKERRQ(ierr); >> /* >> Create the data structure that SNESComputeJacobianDefaultColor() uses to >> compute the actual Jacobians via finite differences. >> */ >> ierr = MatFDColoringCreate(J,iscoloring,&fdcoloring);CHKERRQ(ierr); >> ierr = MatFDColoringSetFunction(fdcoloring,(PetscErrorCode >> (*)(void))FormFunction,NULL);CHKERRQ(ierr); >> ierr = MatFDColoringSetFromOptions(fdcoloring);CHKERRQ(ierr); >> ierr = MatFDColoringSetUp(J,iscoloring,fdcoloring);CHKERRQ(ierr); >> ierr = >> SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);CHKERRQ(ierr); >> >> ierr = SNESSetFromOptions(snes);CHKERRQ(ierr); >> >> ierr = VecSet(x,0.001);CHKERRQ(ierr); >> ierr = SNESSolve(snes,NULL,x);CHKERRQ(ierr); >> >> >> >> And here's the full error code: >> >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Arguments are incompatible >> [0]PETSC ERROR: Zero diagonal on row 0 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 >> [0]PETSC ERROR: ./petsc_flowroute on a named i0n22 by alexprescott Thu >> Apr 2 20:14:01 2020 >> [0]PETSC ERROR: Configure options --prefix=/cm/shared/uaapps/petsc/3.10.3 >> --download-fblaslapack --download-metis --download-parmetis >> --download-hypre PETSC_ARCH=linux-gnu --with-debugging=no COPTFLAGS=-O3 >> CXXOPTFLAGS=-O3 >> [0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1662 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c >> [0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1693 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c >> [0]PETSC ERROR: #3 MatSOR() line 3932 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/interface/matrix.c >> [0]PETSC ERROR: #4 PCApply_SOR() line 31 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/sor/sor.c >> [0]PETSC ERROR: #5 PCApply() line 462 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: #6 KSP_PCApply() line 281 in >> /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h >> [0]PETSC ERROR: #7 KSPInitialResidual() line 67 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c >> [0]PETSC ERROR: #8 KSPSolve_GMRES() line 233 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c >> [0]PETSC ERROR: #9 KSPSolve() line 780 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 367 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/cheby/cheby.c >> [0]PETSC ERROR: #11 KSPSolve() line 780 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: #12 PCMGMCycle_Private() line 20 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: #13 PCApply_MG() line 377 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: #14 PCApply() line 462 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: #15 KSP_PCApply() line 281 in >> /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h >> [0]PETSC ERROR: #16 KSPInitialResidual() line 67 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c >> [0]PETSC ERROR: #17 KSPSolve_GMRES() line 233 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c >> [0]PETSC ERROR: #18 KSPSolve() line 780 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: #19 SNESSolve_NEWTONLS() line 224 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/impls/ls/ls.c >> [0]PETSC ERROR: #20 SNESSolve() line 4397 in >> /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/interface/snes.c >> [0]PETSC ERROR: #21 main() line 705 in >> /home/u16/alexprescott/petsc_example/flowroute/petsc_flowroute.c >> [0]PETSC ERROR: PETSc Option Table entries: >> [0]PETSC ERROR: -ksp_converged_reason >> [0]PETSC ERROR: -ksp_monitor >> [0]PETSC ERROR: -pc_type mg >> [0]PETSC ERROR: -snes_converged_reason >> [0]PETSC ERROR: -snes_monitor >> [0]PETSC ERROR: -snes_view >> [0]PETSC ERROR: ----------------End of Error Message -------send entire >> error message to petsc-maint at mcs.anl.gov---------- >> application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 >> >> >> Printed matrix for a 25 x 25 example >> >> 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 1 
1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 >> 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 >> 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 >> 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 >> 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 >> 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 >> 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 >> 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 >> 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 >> 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 >> 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 >> 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 >> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 >> >> >> >> Best, >> Alexander >> >> >> >> -- >> Alexander Prescott >> alexprescott at email.arizona.edu >> PhD Candidate, The University of Arizona >> Department of Geosciences >> 1040 E. 4th Street >> Tucson, AZ, 85721 >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Fri Apr 3 11:21:32 2020 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Fri, 3 Apr 2020 19:21:32 +0300 Subject: [petsc-users] How to set an initial guess for TS In-Reply-To: References: Message-ID: On Fri, 3 Apr 2020 at 18:26, Fande Kong wrote: > Hi All, > > TSSetSolution will set an initial condition for the current TSSolve(). > What should I do if I want to set an initial guess for the current solution > that is different from the initial condition? The initial guess is > supposed to be really close to the current solution, and then will > accelerate my solver. > > In other words, TSSetSolution will set "U_{n-1}", and now we call TSSolve > to figure out "U_{n}". If I know something about "U_{n}", and I want to set > "\bar{U}_{n}" as the initial guess of "U_{n}" when computing "U_{n}". > > IMHO, the only reliable solution would be to implement SNESSet{Pre|Post}Solve(snes, {Pre|Post}Solve, ctx) to set callback routines {Pre|Post}Solve(snes,b,x,ctx) allowing users to modify the solution vector 'x' in place (but no 'b', of course). We already have that feature in KSP, why not SNES which is an inner loop in TS ? -- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Apr 3 11:33:33 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 3 Apr 2020 12:33:33 -0400 Subject: [petsc-users] false negatives ontests Message-ID: I run this test and it passes. 
But I know it's wrong so I* change a 6 to a 7 in output/ex11_0.out *and I get this diff AND all the rest. Any idea what is going on? Thanks, 12:25 mark/feature-xgc-interface-rebase *> ~/Codes/petsc-master$ make -f ./gmakefile test globsearch="dm_impls_plex_tutorials-ex11_0" PETSC_DIR=$PWD Using MAKEFLAGS: PETSC_DIR=/Users/markadams/Codes/petsc-master globsearch=dm_impls_plex_tutorials-ex11_0 TEST arch-macosx-gnu-g/tests/counts/dm_impls_plex_tutorials-ex11_0.counts ok dm_impls_plex_tutorials-ex11_0 not ok diff-dm_impls_plex_tutorials-ex11_0 # Error code: 1 # 55c55 # < ***** FormRHSSource: have new_imp_rate= 4.975e+01 dt=0.125 stepi=1 time= 1.000e-01 # --- # > ***** FormRHSSource: have new_imp_rate= 4.975e-04 dt=0.125 stepi=1 time= 1.000e-01 # 64,76c64,76 # < 0 SNES Function norm 3.285336127013e-04 # < 1 SNES Function norm 9.604648941118e-07 # < 2 SNES Function norm 1.964241915657e-08 # < 3 SNES Function norm 8.824980789615e-10 # < 4 SNES Function norm 4.556683618010e-11 # < 5 SNES Function norm 2.470586731358e-12 # < 6 SNES Function norm 1.385600544214e-13 *# < Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 7*# < ***** FormRHSSource: have new_imp_rate= 1.981e+03 dt=0.125 stepi=1 time= 1.625e-01 # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 0.466012, accepting step of size 0.125 # < 2) species-0: charge density= -1.6018377279742e+01 z-momentum= -2.8181973830519e-19 energy= 1.2008542897309e+05 # < 2) species-1: charge density= 1.6024885661178e+01 z-momentum= -9.0961762196061e-18 energy= 1.2013832642627e+05 # < 2) Total: charge density= 6.5083814358857e-03, momentum= -9.3779959579113e-18, energy= 2.4022375539935e+05 (m_i[0]/m_e = 3670.94, 80 cells) # --- # > 0 SNES Function norm 3.285336109721e-04 # > 1 SNES Function norm 9.602918060455e-07 # > 2 SNES Function norm 1.961138067634e-08 # > 3 SNES Function norm 8.793960138447e-10 # > 4 SNES Function norm 4.534301360955e-11 # > 5 SNES Function norm 2.455937690832e-12 # > 6 SNES Function norm 1.404599098408e-13 *# > Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 6*# > ***** FormRHSSource: have new_imp_rate= 5.192e-04 dt=0.125 stepi=1 time= 1.625e-01 # > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 0.674842, accepting step of size 0.125 # > 2) species-0: charge density= -1.6018371940813e+01 z-momentum= -6.0282835882130e-19 energy= 1.2008542888021e+05 # > 2) species-1: charge density= 1.6024885680151e+01 z-momentum= -9.8338166614696e-18 energy= 1.2013832641421e+05 # > 2) Total: charge density= 6.5137393379366e-03, momentum= -1.0436645020291e-17, energy= 2.4022375529442e+05 (m_i[0]/m_e = 3670.94, 80 cells) # 78c78 # < 2 TS dt 0.15625 time 0.225 # --- # > 2 TS dt 0.136947 time 0.225 # ------------- # Summary # ------------- # FAILED diff-dm_impls_plex_tutorials-ex11_0 # success 1/2 tests (50.0%) # failed 1/2 tests (50.0%) # todo 0/2 tests (0.0%) # skip 0/2 tests (0.0%) # # Wall clock time for tests: 48 sec # Approximate CPU time (not incl. build time): 47.74 sec # # To rerun failed tests: # /usr/bin/make -f gmakefile test test-fail=1 # # Timing summary (actual test time / total CPU time): -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Apr 3 12:39:18 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 3 Apr 2020 13:39:18 -0400 Subject: [petsc-users] [EXT]Re: Zero diagonal Error Code with all non-zero diagonal entries In-Reply-To: References: Message-ID: On Fri, Apr 3, 2020 at 12:20 PM Alexander B Prescott < alexprescott at email.arizona.edu> wrote: > Hi Matt, > > Thanks for your help. I've done as you suggested and it still doesn't work > -- I get the same error message as before if I add the -snes_fd flag to the > command line. I also added -info and included that output below. Also, a > similar error is returned with other PC types, e.g LU. > Great. I now suspect your residual is flawed. I have several suggestions: a) It looks like you might have an indexing bug, since your residual does not depend on dof 0. PETSc uses 0-based indexing. b) I think the best initial check is to put in the exact solution and check that the residual is 0. I do this in SNES ex5 using MMS. c) -snes_fd forms the Jacobian without coloring. Giving no options uses coloring. Thanks, Matt > [0] PetscInitialize(): PETSc successfully started: number of processors = 1 > [0] PetscGetHostName(): Rejecting domainname, likely is NIS login2.(none) > [0] PetscInitialize(): Running on machine: login2 > [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 > -2080374784 max tags = 268435455 > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 > -2080374784 > [0] DMGetDMSNES(): Creating new DMSNES > [0] PetscGetHostName(): Rejecting domainname, likely is NIS login2.(none) > [0] SNESSetFromOptions(): Setting default finite difference Jacobian matrix > [0] DMCreateMatrix_Shell(): Naively creating matrix using global vector > distribution without preallocation > [0] MatSetUp(): Warning not preallocating matrix storage > [0] DMGetDMKSP(): Creating new DMKSP > 0 SNES Function norm 1.000000000000e+01 > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 25 X 25; storage space: 70 > unneeded,55 used > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0 > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3 > [0] MatCheckCompressedRow(): Found the ratio (num_zerorows > 0)/(num_localrows 25) < 0.6. Do not use CompressedRow routines. > [0] MatSeqAIJCheckInode(): Found 25 nodes out of 25 rows. Not using Inode > routines > [0] SNESComputeJacobian(): Rebuilding preconditioner > [0] PCSetUp(): Setting up PC for first time > [0] PCSetUp_MG(): Using outer operators to define finest grid operator > because PCMGGetSmoother(pc,nlevels-1,&ksp);KSPSetOperators(ksp,...); was > not called. > [0] PCSetUp(): Setting up PC for first time > [0] PCSetUp(): Leaving PC with identical preconditioner since operator is > unchanged > [0] PCSetUp(): Leaving PC with identical preconditioner since operator is > unchanged > [0] PCSetUp(): Leaving PC with identical preconditioner since operator is > unchanged > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Arguments are incompatible > [0]PETSC ERROR: Zero diagonal on row 0 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > .... 
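To make suggestion (b) above concrete, a hedged sketch of evaluating the residual at a known (or manufactured) solution: it reuses the x and snes from the original program, Uexact is a placeholder the application must fill, and it assumes FormFunction has also been registered with SNESSetFunction() so SNESComputeFunction() can use it.

Vec       Uexact, F;
PetscReal fnorm;

ierr = VecDuplicate(x, &Uexact); CHKERRQ(ierr);
ierr = VecDuplicate(x, &F); CHKERRQ(ierr);
/* ... fill Uexact with the exact solution ... */
ierr = SNESComputeFunction(snes, Uexact, F); CHKERRQ(ierr);  /* F = F(Uexact) */
ierr = VecNorm(F, NORM_2, &fnorm); CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD, "||F(Uexact)|| = %g\n", (double) fnorm); CHKERRQ(ierr);
ierr = VecDestroy(&F); CHKERRQ(ierr);
ierr = VecDestroy(&Uexact); CHKERRQ(ierr);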
> > > Best, > Alexander > > > > On Thu, Apr 2, 2020 at 9:07 PM Matthew Knepley wrote: > >> *External Email* >> On Thu, Apr 2, 2020 at 11:33 PM Alexander B Prescott < >> alexprescott at email.arizona.edu> wrote: >> >>> Hello, >>> >>> I am teaching myself how to use Petsc for nonlinear equations and I've >>> run into a problem that I can't quite figure out. I am trying to use the >>> matrix coloring routines for the finite difference Jacobian approximation, >>> and I've followed the steps in the manual to do this. >>> When I run the program with a MG preconditioner, I get back the error: >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> ---------------------------- >>> [0]PETSC ERROR: Arguments are incompatible >>> [0]PETSC ERROR: Zero diagonal on row 0 >>> ..... >>> >>> >> The easiest thing to try is just comment out all your matrix stuff >> including SNESSetJacobian(). PETSc will do the coloring automatically. >> If this works, you know its your coloring. If not, then its something >> with the MG setup. >> >> Thanks, >> >> Matt >> >> >>> What's interesting is that after I've added non-zero entries to the >>> matrix with MatrixSetValues() and assembled the matrix >>> with MatAssemblyBegin() + MatAssemblyEnd(), I can verify that every >>> diagonal entry is non-zero with a call to MatGetValues. I've included a >>> relevant code snippet below and I'm happy to send more. Any guidance is >>> greatly appreciated. >>> >>> Command line: >>> >>> mpirun -n 1 ./program -snes_view -snes_converged_reason -snes_monitor >>> -ksp_monitor -ksp_converged_reason -pc_type mg >>> >>> Code snippet: >>> >>> ierr = FormJacobianColoring(snes,J);CHKERRQ(ierr); // this function >>> set's matrix values and assembles the matrix >>> >>> // removed the code, but this is where I've used MatGetValues() to >>> ensure that the diagonal of J (as well as other entries) has been set to 1.0 >>> >>> ierr = MatColoringCreate(J,&coloring);CHKERRQ(ierr); >>> ierr = MatColoringSetType(coloring,MATCOLORINGSL);CHKERRQ(ierr); >>> ierr = MatColoringSetFromOptions(coloring);CHKERRQ(ierr); >>> ierr = MatColoringApply(coloring,&iscoloring);CHKERRQ(ierr); >>> ierr = MatColoringDestroy(&coloring);CHKERRQ(ierr); >>> /* >>> Create the data structure that SNESComputeJacobianDefaultColor() uses to >>> compute the actual Jacobians via finite differences. >>> */ >>> ierr = MatFDColoringCreate(J,iscoloring,&fdcoloring);CHKERRQ(ierr); >>> ierr = MatFDColoringSetFunction(fdcoloring,(PetscErrorCode >>> (*)(void))FormFunction,NULL);CHKERRQ(ierr); >>> ierr = MatFDColoringSetFromOptions(fdcoloring);CHKERRQ(ierr); >>> ierr = MatFDColoringSetUp(J,iscoloring,fdcoloring);CHKERRQ(ierr); >>> ierr = >>> SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);CHKERRQ(ierr); >>> >>> ierr = SNESSetFromOptions(snes);CHKERRQ(ierr); >>> >>> ierr = VecSet(x,0.001);CHKERRQ(ierr); >>> ierr = SNESSolve(snes,NULL,x);CHKERRQ(ierr); >>> >>> >>> >>> And here's the full error code: >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Arguments are incompatible >>> [0]PETSC ERROR: Zero diagonal on row 0 >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>> for trouble shooting. 
>>> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 >>> [0]PETSC ERROR: ./petsc_flowroute on a named i0n22 by alexprescott Thu >>> Apr 2 20:14:01 2020 >>> [0]PETSC ERROR: Configure options >>> --prefix=/cm/shared/uaapps/petsc/3.10.3 --download-fblaslapack >>> --download-metis --download-parmetis --download-hypre PETSC_ARCH=linux-gnu >>> --with-debugging=no COPTFLAGS=-O3 CXXOPTFLAGS=-O3 >>> [0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1662 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c >>> [0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1693 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c >>> [0]PETSC ERROR: #3 MatSOR() line 3932 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/interface/matrix.c >>> [0]PETSC ERROR: #4 PCApply_SOR() line 31 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/sor/sor.c >>> [0]PETSC ERROR: #5 PCApply() line 462 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: #6 KSP_PCApply() line 281 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h >>> [0]PETSC ERROR: #7 KSPInitialResidual() line 67 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c >>> [0]PETSC ERROR: #8 KSPSolve_GMRES() line 233 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c >>> [0]PETSC ERROR: #9 KSPSolve() line 780 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 367 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/cheby/cheby.c >>> [0]PETSC ERROR: #11 KSPSolve() line 780 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #12 PCMGMCycle_Private() line 20 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c >>> [0]PETSC ERROR: #13 PCApply_MG() line 377 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c >>> [0]PETSC ERROR: #14 PCApply() line 462 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: #15 KSP_PCApply() line 281 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h >>> [0]PETSC ERROR: #16 KSPInitialResidual() line 67 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c >>> [0]PETSC ERROR: #17 KSPSolve_GMRES() line 233 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c >>> [0]PETSC ERROR: #18 KSPSolve() line 780 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #19 SNESSolve_NEWTONLS() line 224 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/impls/ls/ls.c >>> [0]PETSC ERROR: #20 SNESSolve() line 4397 in >>> /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/interface/snes.c >>> [0]PETSC ERROR: #21 main() line 705 in >>> /home/u16/alexprescott/petsc_example/flowroute/petsc_flowroute.c >>> [0]PETSC ERROR: PETSc Option Table entries: >>> [0]PETSC ERROR: -ksp_converged_reason >>> [0]PETSC ERROR: -ksp_monitor >>> [0]PETSC ERROR: -pc_type mg >>> [0]PETSC ERROR: -snes_converged_reason >>> [0]PETSC ERROR: -snes_monitor >>> [0]PETSC ERROR: -snes_view >>> [0]PETSC ERROR: ----------------End of Error Message -------send entire >>> error message to petsc-maint at mcs.anl.gov---------- >>> application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 >>> >>> >>> Printed matrix for a 25 x 25 example >>> >>> 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 >>> 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 >>> 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 >>> 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 >>> 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 >>> 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 >>> 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 >>> 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 >>> 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 >>> 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 >>> 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 >>> 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 >>> 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 >>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 >>> >>> >>> >>> Best, >>> Alexander >>> >>> >>> >>> -- >>> Alexander Prescott >>> alexprescott at email.arizona.edu >>> PhD Candidate, The University of Arizona >>> Department of Geosciences >>> 1040 E. 4th Street >>> Tucson, AZ, 85721 >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > > -- > Alexander Prescott > alexprescott at email.arizona.edu > PhD Candidate, The University of Arizona > Department of Geosciences > 1040 E. 4th Street > Tucson, AZ, 85721 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 3 12:41:41 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 3 Apr 2020 13:41:41 -0400 Subject: [petsc-users] false negatives ontests In-Reply-To: References: Message-ID: On Fri, Apr 3, 2020 at 12:34 PM Mark Adams wrote: > I run this test and it passes. But I know it's wrong so I* change a 6 to > a 7 in output/ex11_0.out *and I get this diff AND all the rest. > Any idea what is going on? > Yes. We do not compare any real number, only integers. Once you change an integer, you also get the diff of all the real numbers. This is, of course, not perfect, but we did not have the manpower to overhaul all tests to account for wibble among machines. 
Matt > Thanks, > > 12:25 mark/feature-xgc-interface-rebase *> ~/Codes/petsc-master$ make -f > ./gmakefile test globsearch="dm_impls_plex_tutorials-ex11_0" PETSC_DIR=$PWD > Using MAKEFLAGS: PETSC_DIR=/Users/markadams/Codes/petsc-master > globsearch=dm_impls_plex_tutorials-ex11_0 > TEST > arch-macosx-gnu-g/tests/counts/dm_impls_plex_tutorials-ex11_0.counts > ok dm_impls_plex_tutorials-ex11_0 > not ok diff-dm_impls_plex_tutorials-ex11_0 # Error code: 1 > # 55c55 > # < ***** FormRHSSource: have new_imp_rate= 4.975e+01 > dt=0.125 stepi=1 time= 1.000e-01 > # --- > # > ***** FormRHSSource: have new_imp_rate= 4.975e-04 > dt=0.125 stepi=1 time= 1.000e-01 > # 64,76c64,76 > # < 0 SNES Function norm 3.285336127013e-04 > # < 1 SNES Function norm 9.604648941118e-07 > # < 2 SNES Function norm 1.964241915657e-08 > # < 3 SNES Function norm 8.824980789615e-10 > # < 4 SNES Function norm 4.556683618010e-11 > # < 5 SNES Function norm 2.470586731358e-12 > # < 6 SNES Function norm 1.385600544214e-13 > > *# < Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE > iterations 7*# < ***** FormRHSSource: have > new_imp_rate= 1.981e+03 dt=0.125 stepi=1 time= 1.625e-01 > # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation > error 0.466012, accepting step of size 0.125 > # < 2) species-0: charge density= -1.6018377279742e+01 z-momentum= > -2.8181973830519e-19 energy= 1.2008542897309e+05 > # < 2) species-1: charge density= 1.6024885661178e+01 z-momentum= > -9.0961762196061e-18 energy= 1.2013832642627e+05 > # < 2) Total: charge density= 6.5083814358857e-03, > momentum= -9.3779959579113e-18, energy= 2.4022375539935e+05 (m_i[0]/m_e = > 3670.94, 80 cells) > # --- > # > 0 SNES Function norm 3.285336109721e-04 > # > 1 SNES Function norm 9.602918060455e-07 > # > 2 SNES Function norm 1.961138067634e-08 > # > 3 SNES Function norm 8.793960138447e-10 > # > 4 SNES Function norm 4.534301360955e-11 > # > 5 SNES Function norm 2.455937690832e-12 > # > 6 SNES Function norm 1.404599098408e-13 > > *# > Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE > iterations 6*# > ***** FormRHSSource: have > new_imp_rate= 5.192e-04 dt=0.125 stepi=1 time= 1.625e-01 > # > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation > error 0.674842, accepting step of size 0.125 > # > 2) species-0: charge density= -1.6018371940813e+01 z-momentum= > -6.0282835882130e-19 energy= 1.2008542888021e+05 > # > 2) species-1: charge density= 1.6024885680151e+01 z-momentum= > -9.8338166614696e-18 energy= 1.2013832641421e+05 > # > 2) Total: charge density= 6.5137393379366e-03, > momentum= -1.0436645020291e-17, energy= 2.4022375529442e+05 (m_i[0]/m_e = > 3670.94, 80 cells) > # 78c78 > # < 2 TS dt 0.15625 time 0.225 > # --- > # > 2 TS dt 0.136947 time 0.225 > > # ------------- > # Summary > # ------------- > # FAILED diff-dm_impls_plex_tutorials-ex11_0 > # success 1/2 tests (50.0%) > # failed 1/2 tests (50.0%) > # todo 0/2 tests (0.0%) > # skip 0/2 tests (0.0%) > # > # Wall clock time for tests: 48 sec > # Approximate CPU time (not incl. build time): 47.74 sec > # > # To rerun failed tests: > # /usr/bin/make -f gmakefile test test-fail=1 > # > # Timing summary (actual test time / total CPU time): > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Fri Apr 3 13:54:17 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 3 Apr 2020 14:54:17 -0400 Subject: [petsc-users] false negatives ontests In-Reply-To: References: Message-ID: Fair enough, thanks, On Fri, Apr 3, 2020 at 1:48 PM Matthew Knepley wrote: > On Fri, Apr 3, 2020 at 12:34 PM Mark Adams wrote: > >> I run this test and it passes. But I know it's wrong so I* change a 6 to >> a 7 in output/ex11_0.out *and I get this diff AND all the rest. >> Any idea what is going on? >> > > Yes. We do not compare any real number, only integers. Once you change an > integer, you also get the diff > of all the real numbers. This is, of course, not perfect, but we did not > have the manpower to overhaul all tests > to account for wibble among machines. > > Matt > > >> Thanks, >> >> 12:25 mark/feature-xgc-interface-rebase *> ~/Codes/petsc-master$ make -f >> ./gmakefile test globsearch="dm_impls_plex_tutorials-ex11_0" PETSC_DIR=$PWD >> Using MAKEFLAGS: PETSC_DIR=/Users/markadams/Codes/petsc-master >> globsearch=dm_impls_plex_tutorials-ex11_0 >> TEST >> arch-macosx-gnu-g/tests/counts/dm_impls_plex_tutorials-ex11_0.counts >> ok dm_impls_plex_tutorials-ex11_0 >> not ok diff-dm_impls_plex_tutorials-ex11_0 # Error code: 1 >> # 55c55 >> # < ***** FormRHSSource: have new_imp_rate= >> 4.975e+01 dt=0.125 stepi=1 time= 1.000e-01 >> # --- >> # > ***** FormRHSSource: have new_imp_rate= >> 4.975e-04 dt=0.125 stepi=1 time= 1.000e-01 >> # 64,76c64,76 >> # < 0 SNES Function norm 3.285336127013e-04 >> # < 1 SNES Function norm 9.604648941118e-07 >> # < 2 SNES Function norm 1.964241915657e-08 >> # < 3 SNES Function norm 8.824980789615e-10 >> # < 4 SNES Function norm 4.556683618010e-11 >> # < 5 SNES Function norm 2.470586731358e-12 >> # < 6 SNES Function norm 1.385600544214e-13 >> >> *# < Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE >> iterations 7*# < ***** FormRHSSource: have >> new_imp_rate= 1.981e+03 dt=0.125 stepi=1 time= 1.625e-01 >> # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation >> error 0.466012, accepting step of size 0.125 >> # < 2) species-0: charge density= -1.6018377279742e+01 >> z-momentum= -2.8181973830519e-19 energy= 1.2008542897309e+05 >> # < 2) species-1: charge density= 1.6024885661178e+01 >> z-momentum= -9.0961762196061e-18 energy= 1.2013832642627e+05 >> # < 2) Total: charge density= 6.5083814358857e-03, >> momentum= -9.3779959579113e-18, energy= 2.4022375539935e+05 (m_i[0]/m_e = >> 3670.94, 80 cells) >> # --- >> # > 0 SNES Function norm 3.285336109721e-04 >> # > 1 SNES Function norm 9.602918060455e-07 >> # > 2 SNES Function norm 1.961138067634e-08 >> # > 3 SNES Function norm 8.793960138447e-10 >> # > 4 SNES Function norm 4.534301360955e-11 >> # > 5 SNES Function norm 2.455937690832e-12 >> # > 6 SNES Function norm 1.404599098408e-13 >> >> *# > Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE >> iterations 6*# > ***** FormRHSSource: have >> new_imp_rate= 5.192e-04 dt=0.125 stepi=1 time= 1.625e-01 >> # > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation >> error 0.674842, accepting step of size 0.125 >> # > 2) species-0: charge density= -1.6018371940813e+01 >> z-momentum= -6.0282835882130e-19 energy= 1.2008542888021e+05 >> # > 2) species-1: charge density= 1.6024885680151e+01 >> z-momentum= -9.8338166614696e-18 energy= 1.2013832641421e+05 >> # > 2) Total: charge density= 6.5137393379366e-03, >> momentum= -1.0436645020291e-17, energy= 2.4022375529442e+05 (m_i[0]/m_e = >> 3670.94, 80 cells) >> # 78c78 >> # < 2 TS dt 0.15625 
time 0.225 >> # --- >> # > 2 TS dt 0.136947 time 0.225 >> >> # ------------- >> # Summary >> # ------------- >> # FAILED diff-dm_impls_plex_tutorials-ex11_0 >> # success 1/2 tests (50.0%) >> # failed 1/2 tests (50.0%) >> # todo 0/2 tests (0.0%) >> # skip 0/2 tests (0.0%) >> # >> # Wall clock time for tests: 48 sec >> # Approximate CPU time (not incl. build time): 47.74 sec >> # >> # To rerun failed tests: >> # /usr/bin/make -f gmakefile test test-fail=1 >> # >> # Timing summary (actual test time / total CPU time): >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Apr 3 10:28:21 2020 From: jed at jedbrown.org (Jed Brown) Date: Fri, 03 Apr 2020 09:28:21 -0600 Subject: [petsc-users] How to set an initial guess for TS In-Reply-To: References: Message-ID: <87y2rcbnca.fsf@jedbrown.org> This sounds like you're talking about a starting procedure for a DAE (or near-singular ODE)? Fande Kong writes: > Hi All, > > TSSetSolution will set an initial condition for the current TSSolve(). What > should I do if I want to set an initial guess for the current solution that > is different from the initial condition? The initial guess is supposed to > be really close to the current solution, and then will accelerate my solver. > > In other words, TSSetSolution will set "U_{n-1}", and now we call TSSolve > to figure out "U_{n}". If I know something about "U_{n}", and I want to set > "\bar{U}_{n}" as the initial guess of "U_{n}" when computing "U_{n}". > > > Thanks, > > Fande, From jed at jedbrown.org Fri Apr 3 11:36:49 2020 From: jed at jedbrown.org (Jed Brown) Date: Fri, 03 Apr 2020 10:36:49 -0600 Subject: [petsc-users] false negatives ontests In-Reply-To: References: Message-ID: <87pncobk66.fsf@jedbrown.org> Testing basically ignores numeric values (too many false positives), but when a test fails, it shows the whole diff. Mark Adams writes: > I run this test and it passes. But I know it's wrong so I* change a 6 to a > 7 in output/ex11_0.out *and I get this diff AND all the rest. > Any idea what is going on? 
> Thanks, > > 12:25 mark/feature-xgc-interface-rebase *> ~/Codes/petsc-master$ make -f > ./gmakefile test globsearch="dm_impls_plex_tutorials-ex11_0" PETSC_DIR=$PWD > Using MAKEFLAGS: PETSC_DIR=/Users/markadams/Codes/petsc-master > globsearch=dm_impls_plex_tutorials-ex11_0 > TEST > arch-macosx-gnu-g/tests/counts/dm_impls_plex_tutorials-ex11_0.counts > ok dm_impls_plex_tutorials-ex11_0 > not ok diff-dm_impls_plex_tutorials-ex11_0 # Error code: 1 > # 55c55 > # < ***** FormRHSSource: have new_imp_rate= 4.975e+01 > dt=0.125 stepi=1 time= 1.000e-01 > # --- > # > ***** FormRHSSource: have new_imp_rate= 4.975e-04 > dt=0.125 stepi=1 time= 1.000e-01 > # 64,76c64,76 > # < 0 SNES Function norm 3.285336127013e-04 > # < 1 SNES Function norm 9.604648941118e-07 > # < 2 SNES Function norm 1.964241915657e-08 > # < 3 SNES Function norm 8.824980789615e-10 > # < 4 SNES Function norm 4.556683618010e-11 > # < 5 SNES Function norm 2.470586731358e-12 > # < 6 SNES Function norm 1.385600544214e-13 > > *# < Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE > iterations 7*# < ***** FormRHSSource: have > new_imp_rate= 1.981e+03 dt=0.125 stepi=1 time= 1.625e-01 > # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation > error 0.466012, accepting step of size 0.125 > # < 2) species-0: charge density= -1.6018377279742e+01 z-momentum= > -2.8181973830519e-19 energy= 1.2008542897309e+05 > # < 2) species-1: charge density= 1.6024885661178e+01 z-momentum= > -9.0961762196061e-18 energy= 1.2013832642627e+05 > # < 2) Total: charge density= 6.5083814358857e-03, momentum= > -9.3779959579113e-18, energy= 2.4022375539935e+05 (m_i[0]/m_e = 3670.94, > 80 cells) > # --- > # > 0 SNES Function norm 3.285336109721e-04 > # > 1 SNES Function norm 9.602918060455e-07 > # > 2 SNES Function norm 1.961138067634e-08 > # > 3 SNES Function norm 8.793960138447e-10 > # > 4 SNES Function norm 4.534301360955e-11 > # > 5 SNES Function norm 2.455937690832e-12 > # > 6 SNES Function norm 1.404599098408e-13 > > *# > Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE > iterations 6*# > ***** FormRHSSource: have > new_imp_rate= 5.192e-04 dt=0.125 stepi=1 time= 1.625e-01 > # > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation > error 0.674842, accepting step of size 0.125 > # > 2) species-0: charge density= -1.6018371940813e+01 z-momentum= > -6.0282835882130e-19 energy= 1.2008542888021e+05 > # > 2) species-1: charge density= 1.6024885680151e+01 z-momentum= > -9.8338166614696e-18 energy= 1.2013832641421e+05 > # > 2) Total: charge density= 6.5137393379366e-03, momentum= > -1.0436645020291e-17, energy= 2.4022375529442e+05 (m_i[0]/m_e = 3670.94, > 80 cells) > # 78c78 > # < 2 TS dt 0.15625 time 0.225 > # --- > # > 2 TS dt 0.136947 time 0.225 > > # ------------- > # Summary > # ------------- > # FAILED diff-dm_impls_plex_tutorials-ex11_0 > # success 1/2 tests (50.0%) > # failed 1/2 tests (50.0%) > # todo 0/2 tests (0.0%) > # skip 0/2 tests (0.0%) > # > # Wall clock time for tests: 48 sec > # Approximate CPU time (not incl. build time): 47.74 sec > # > # To rerun failed tests: > # /usr/bin/make -f gmakefile test test-fail=1 > # > # Timing summary (actual test time / total CPU time): From fdkong.jd at gmail.com Fri Apr 3 15:06:13 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Fri, 3 Apr 2020 14:06:13 -0600 Subject: [petsc-users] How to set an initial guess for TS In-Reply-To: <87y2rcbnca.fsf@jedbrown.org> References: <87y2rcbnca.fsf@jedbrown.org> Message-ID: No. 
I am working on a transient loosely coupled multiphysics simulation. Assume there are two physics problems: problem A and problem B. During each time step, there is a Picard iteration between problem A and problem B. During each Picard step, you solve problem A (or B) with the solution (U_{n-1}) of the previous time step as the initial condition. In the Picard solve stage, I know the solution (\bar{U}_{n}) of the current time step but from the previous Picard iteration. Use \bar{U}_{n}) instead of U_{n-1} as the initial guess for SNES will have a better convergence for me. Thanks, Fande, On Fri, Apr 3, 2020 at 1:10 PM Jed Brown wrote: > This sounds like you're talking about a starting procedure for a DAE (or > near-singular ODE)? > > Fande Kong writes: > > > Hi All, > > > > TSSetSolution will set an initial condition for the current TSSolve(). > What > > should I do if I want to set an initial guess for the current solution > that > > is different from the initial condition? The initial guess is supposed > to > > be really close to the current solution, and then will accelerate my > solver. > > > > In other words, TSSetSolution will set "U_{n-1}", and now we call TSSolve > > to figure out "U_{n}". If I know something about "U_{n}", and I want to > set > > "\bar{U}_{n}" as the initial guess of "U_{n}" when computing "U_{n}". > > > > > > Thanks, > > > > Fande, > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Apr 3 17:29:49 2020 From: jed at jedbrown.org (Jed Brown) Date: Fri, 03 Apr 2020 16:29:49 -0600 Subject: [petsc-users] How to set an initial guess for TS In-Reply-To: References: <87y2rcbnca.fsf@jedbrown.org> Message-ID: <87d08ob3tu.fsf@jedbrown.org> Oh, you just want an initial guess for SNES? Does it work to pull out the SNES and SNESSetComputeInitialGuess? Fande Kong writes: > No. I am working on a transient loosely coupled multiphysics simulation. > Assume there are two physics problems: problem A and problem B. During each > time step, there is a Picard iteration between problem A and problem B. > During each Picard step, you solve problem A (or B) with the solution > (U_{n-1}) of the previous time step as the initial condition. In the Picard > solve stage, I know the solution (\bar{U}_{n}) of the current time step but > from the previous Picard iteration. Use \bar{U}_{n}) instead of U_{n-1} as > the initial guess for SNES will have a better convergence for me. > > Thanks, > > Fande, > > > On Fri, Apr 3, 2020 at 1:10 PM Jed Brown wrote: > >> This sounds like you're talking about a starting procedure for a DAE (or >> near-singular ODE)? >> >> Fande Kong writes: >> >> > Hi All, >> > >> > TSSetSolution will set an initial condition for the current TSSolve(). >> What >> > should I do if I want to set an initial guess for the current solution >> that >> > is different from the initial condition? The initial guess is supposed >> to >> > be really close to the current solution, and then will accelerate my >> solver. >> > >> > In other words, TSSetSolution will set "U_{n-1}", and now we call TSSolve >> > to figure out "U_{n}". If I know something about "U_{n}", and I want to >> set >> > "\bar{U}_{n}" as the initial guess of "U_{n}" when computing "U_{n}". 
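
A minimal sketch of what Jed suggests, assuming the Picard driver keeps the latest iterate \bar{U}_{n} in a user context; PicardCtx, prev_iterate and picard_ctx are illustrative names, and whether the TS stages actually pick this guess up is exactly the question Jed raises:

static PetscErrorCode FormInitialGuess(SNES snes, Vec x, void *ctx)
{
  PicardCtx      *user = (PicardCtx*)ctx;               /* hypothetical user struct */
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecCopy(user->prev_iterate, x);CHKERRQ(ierr);  /* \bar{U}_{n} from the previous Picard pass */
  PetscFunctionReturn(0);
}

/* before each TSSolve()/TSStep() */
SNES snes;
ierr = TSGetSNES(ts, &snes);CHKERRQ(ierr);
ierr = SNESSetComputeInitialGuess(snes, FormInitialGuess, &picard_ctx);CHKERRQ(ierr);
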
>> > >> > >> > Thanks, >> > >> > Fande, >> From eda.oktay at metu.edu.tr Sat Apr 4 03:39:13 2020 From: eda.oktay at metu.edu.tr (Eda Oktay) Date: Sat, 4 Apr 2020 11:39:13 +0300 Subject: [petsc-users] Gather and Broadcast Parallel Vectors in k-means algorithm Message-ID: Hi all, I created a parallel vector UV, by using VecDuplicateVecs since I need row vectors of a matrix. However, I need the whole vector be in all processors, which means I need to gather all and broadcast them to all processors. To gather, I tried to use VecStrideGatherAll: Vec UVG; VecStrideGatherAll(UV,UVG,INSERT_VALUES); VecView(UVG,PETSC_VIEWER_STDOUT_WORLD); however when I try to view the vector, I get the following error. [3]PETSC ERROR: Invalid argument [3]PETSC ERROR: Wrong type of object: Parameter # 1 [3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [3]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 [3]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 11:22:54 2020 [3]PETSC ERROR: Wrong type of object: Parameter # 1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 [0]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 11:22:54 2020 [0]PETSC ERROR: Configure options --download-mpich --download-openblas --download-slepc --download-metis --download-parmetis --download-chaco --with-X=1 [0]PETSC ERROR: #1 VecStrideGatherAll() line 646 in /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c ./clustering_son_final_edgecut_without_parmetis on a arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 11:22:54 2020 [1]PETSC ERROR: Configure options --download-mpich --download-openblas --download-slepc --download-metis --download-parmetis --download-chaco --with-X=1 [1]PETSC ERROR: #1 VecStrideGatherAll() line 646 in /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c Configure options --download-mpich --download-openblas --download-slepc --download-metis --download-parmetis --download-chaco --with-X=1 [3]PETSC ERROR: #1 VecStrideGatherAll() line 646 in /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c I couldn't understand why I am getting this error. Is this because of UV being created by VecDuplicateVecs? How can I solve this problem? The other question is broadcasting. After gathering all elements of the vector UV, I need to broadcast them to all processors. I found PetscSFBcastBegin. However, I couldn't understand the PetscSF concept properly. I couldn't adjust my question to the star forest concept. My problem is: If I have 4 processors, I create a matrix whose columns are 4 smallest eigenvectors, say of size 72. Then by defining each row of this matrix as a vector, I cluster them by using k-means clustering algorithm. For now, I cluster them by using MATLAB and I obtain a vector showing which row vector is in which cluster. After getting this vector, to cluster row vectors according to this information, all processors need to have all of the row vectors. According to this problem, how can I use the star forest concept? I will be glad if you can help me about this problem since I don't have enough knowledge about graph theory. An if you have any idea about how can I use k-means algorithm in a more practical way, please let me know. Thanks! 
Eda -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Sat Apr 4 08:37:40 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sat, 4 Apr 2020 08:37:40 -0500 Subject: [petsc-users] Gather and Broadcast Parallel Vectors in k-means algorithm In-Reply-To: References: Message-ID: Check https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecScatterCreateToAll.html to see if it meets your needs. --Junchao Zhang On Sat, Apr 4, 2020 at 3:39 AM Eda Oktay wrote: > Hi all, > > I created a parallel vector UV, by using VecDuplicateVecs since I need row > vectors of a matrix. However, I need the whole vector be in all processors, > which means I need to gather all and broadcast them to all processors. To > gather, I tried to use VecStrideGatherAll: > > Vec UVG; > VecStrideGatherAll(UV,UVG,INSERT_VALUES); > VecView(UVG,PETSC_VIEWER_STDOUT_WORLD); > > however when I try to view the vector, I get the following error. > > [3]PETSC ERROR: Invalid argument > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > [3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [3]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > [3]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > 11:22:54 2020 > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > [0]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > 11:22:54 2020 > [0]PETSC ERROR: Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [0]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > ./clustering_son_final_edgecut_without_parmetis on a arch-linux2-c-debug > named localhost.localdomain by edaoktay Sat Apr 4 11:22:54 2020 > [1]PETSC ERROR: Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [1]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > Configure options --download-mpich --download-openblas --download-slepc > --download-metis --download-parmetis --download-chaco --with-X=1 > [3]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > > I couldn't understand why I am getting this error. Is this because of UV > being created by VecDuplicateVecs? How can I solve this problem? > > The other question is broadcasting. After gathering all elements of the > vector UV, I need to broadcast them to all processors. I found > PetscSFBcastBegin. However, I couldn't understand the PetscSF concept > properly. I couldn't adjust my question to the star forest concept. > > My problem is: If I have 4 processors, I create a matrix whose columns are > 4 smallest eigenvectors, say of size 72. Then by defining each row of this > matrix as a vector, I cluster them by using k-means clustering algorithm. > For now, I cluster them by using MATLAB and I obtain a vector showing which > row vector is in which cluster. 
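
A minimal sketch of the VecScatterCreateToAll() route Junchao points to above, applied to a single parallel Vec (UV[i] stands for any one vector from the VecDuplicateVecs array; scatter and uv_all are illustrative names). The same scatter covers both the gather and the broadcast, since every rank ends up with its own full sequential copy:

VecScatter scatter;
Vec        uv_all;   /* sequential Vec with every entry, duplicated on each rank */

ierr = VecScatterCreateToAll(UV[i], &scatter, &uv_all);CHKERRQ(ierr);
ierr = VecScatterBegin(scatter, UV[i], uv_all, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
ierr = VecScatterEnd(scatter, UV[i], uv_all, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
/* every rank can now read the whole vector, e.g. to apply the k-means labels */
ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
ierr = VecDestroy(&uv_all);CHKERRQ(ierr);
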
After getting this vector, to cluster row > vectors according to this information, all processors need to have all of > the row vectors. > > According to this problem, how can I use the star forest concept? > > I will be glad if you can help me about this problem since I don't have > enough knowledge about graph theory. An if you have any idea about how can > I use k-means algorithm in a more practical way, please let me know. > > Thanks! > > Eda > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat Apr 4 12:49:29 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 4 Apr 2020 13:49:29 -0400 Subject: [petsc-users] adaptive TS question Message-ID: I have a plasma problem of the form dU/dt+ F(U,t) = S(t). U is the density of species in a plasma. S is, say, a step function scaling of a Maxwellian for two of the species. Using 2 or 3 species. I'm happy with it for dU/dt+ F(U,t) = 0. My test problem starts with a time step of about O(1) and goes to a quasi equilibrium with a time step about O(100). But when the source terms kick in the time step goes way down. This seems strange because dU/dt = S(t) just adds S*time to U. (I have verified that with U=0 one step does just add S * dt to U). I use these parameters. FWIW, this TS does three stages, approximates the error and adjusts accordingly. I notice that my S (RHS) function is called after the first stage. So this algorithm solves dU/dt+ F(U,t) = 0 in the first stage. Apparently. Any ideas? Thanks, Mark -ts_type arkimex -ts_exact_final_time stepover -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-6 -ts_dt 1e-1 -ts_adapt_clip .25,1.1 -ts_adapt_scale_solve_failed 0.75 -ts_adapt_time_step_increase_delay 5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yyang85 at stanford.edu Sun Apr 5 05:28:05 2020 From: yyang85 at stanford.edu (Yuyun Yang) Date: Sun, 5 Apr 2020 10:28:05 +0000 Subject: [petsc-users] adaptive TS for IMEX Message-ID: Hello team, I wonder whether TS currently can do error control for both explicit and implicit Runge-Kutta methods? I have a multi-physics problem that have some fields integrated explicitly and others implicitly, and sometimes they operate on very different time scales. The ODE solver I currently have (not PETSc) has error control only on the explicit variables and use the computed dt for the implicit variables, which results in some pretty sudden jumps in solutions. I wonder if there is an example in TS that could solve this problem? Thank you! Yuyun -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun Apr 5 06:05:15 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 5 Apr 2020 07:05:15 -0400 Subject: [petsc-users] adaptive TS for IMEX In-Reply-To: References: Message-ID: See the manual, for details, there are a lot of options here. I use this and adaptive seems to get turned on my default: -ts_type arkimex -ts_arkimex_type 1bee and -ts_adapt_monitor -info :tsadapt On Sun, Apr 5, 2020 at 6:28 AM Yuyun Yang wrote: > Hello team, > > I wonder whether TS currently can do error control for both explicit and > implicit Runge-Kutta methods? I have a multi-physics problem that have some > fields integrated explicitly and others implicitly, and sometimes they > operate on very different time scales. 
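
For reference, a minimal sketch of the options Mark lists above set from code instead of the command line; ts is assumed to be an already created TS with its IFunction/RHSFunction attached, and the comments are only a reading of what the embedded 1bee estimator provides:

TSAdapt adapt;

ierr = TSSetType(ts, TSARKIMEX);CHKERRQ(ierr);
ierr = TSARKIMEXSetType(ts, TSARKIMEX1BEE);CHKERRQ(ierr);   /* IMEX scheme with an embedded error estimate */
ierr = TSGetAdapt(ts, &adapt);CHKERRQ(ierr);
ierr = TSAdaptSetType(adapt, TSADAPTBASIC);CHKERRQ(ierr);   /* step control driven by that estimate */
ierr = TSSetFromOptions(ts);CHKERRQ(ierr);                  /* so -ts_adapt_monitor and -info :tsadapt still apply */
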
The ODE solver I currently have (not > PETSc) has error control only on the explicit variables and use the > computed dt for the implicit variables, which results in some pretty sudden > jumps in solutions. I wonder if there is an example in TS that could solve > this problem? > > Thank you! > Yuyun > -------------- next part -------------- An HTML attachment was scrubbed... URL: From franco.dassi at unimib.it Mon Apr 6 06:16:29 2020 From: franco.dassi at unimib.it (Franco Dassi) Date: Mon, 6 Apr 2020 13:16:29 +0200 Subject: [petsc-users] Issue with the installation of Petsc Message-ID: Good morning, We are using your Petsc library to solve linear systems coming from PDEs. However recently we are trying to install PETSC (maint branch) on a linux system but we have some issues. On the one hand, when we use the following configure command ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack --with-mpi=1 --download-superlu_dist --download-mumps --download-hypre --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native' --download-scalapack I obtain the following error: =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: checkFortranCompiler from config.setCompilers(config/BuildSystem/config/setCompilers.py:989) ******************************************************************************* OSError while running ./configure ------------------------------------------------------------------------------- Cannot run executables created with FC. If this machine uses a batch system to submit jobs you will need to configure using ./configure with the additional option --with-batch. Otherwise there is problem with the compilers. Can you compile and run code with your compiler '/home/gurst/repositories/petsc/arch-linux-c-opt/bin/mpif90'? ******************************************************************************* On the other hand if we add the --with-batch the library is compiled, but when we try to use it to run the code we obtain the following error (at run time): ./testGBDForMixed3d: symbol lookup error: /home/martinelli/repositories/petsc/arch-linux-c-opt//lib/libmpifort.so.12: undefined symbol: MPIR_F_NeedInit Looking the symbols table in the petsc/arch-linux-c-opt/lib directory, It seems that something is undefined: ~/repositories/petsc/arch-linux-c-opt/lib $ nm -A *.so | grep -n MPIR_F_NeedInit 12591:libfmpich.so: U MPIR_F_NeedInit 18943:libmpi.so:000000000031ab00 D MPIR_F_NeedInit 27046:libmpich.so:000000000031ab00 D MPIR_F_NeedInit 34565:libmpichf90.so: U MPIR_F_NeedInit 40333:libmpifort.so: U MPIR_F_NeedInit 46685:libmpl.so:000000000031ab00 D MPIR_F_NeedInit 54788:libopa.so:000000000031ab00 D MPIR_F_NeedInit Are we doing something wrong? Thank you in advance for your time best Massimiliano and Franco -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 6 09:09:02 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 09:09:02 -0500 (CDT) Subject: [petsc-users] Issue with the installation of Petsc In-Reply-To: References: Message-ID: Can you send configure.log from the first build attempt? 
Satish On Mon, 6 Apr 2020, Franco Dassi wrote: > Good morning, > > We are using your Petsc library to solve linear systems coming from PDEs. > > However recently we are trying to install PETSC (maint branch) on a linux > system but we have some issues. > > On the one hand, when we use the following configure command > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran > --download-mpich --download-fblaslapack --with-mpi=1 > --download-superlu_dist --download-mumps --download-hypre > --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' > CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 > -march=native -mtune=native' --download-scalapack > > > I obtain the following error: > > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > TESTING: checkFortranCompiler from > config.setCompilers(config/BuildSystem/config/setCompilers.py:989) > > ******************************************************************************* > OSError while running ./configure > ------------------------------------------------------------------------------- > Cannot run executables created with FC. If this machine uses a batch system > to submit jobs you will need to configure using ./configure with the > additional option --with-batch. > Otherwise there is problem with the compilers. Can you compile and run > code with your compiler > '/home/gurst/repositories/petsc/arch-linux-c-opt/bin/mpif90'? > ******************************************************************************* > > > On the other hand if we add the --with-batch the library is compiled, but > when we try to use it to run the code we obtain the following error (at run > time): > > ./testGBDForMixed3d: symbol lookup error: > /home/martinelli/repositories/petsc/arch-linux-c-opt//lib/libmpifort.so.12: > undefined symbol: MPIR_F_NeedInit > > > Looking the symbols table in the petsc/arch-linux-c-opt/lib directory, > It seems that something is undefined: > > ~/repositories/petsc/arch-linux-c-opt/lib $ nm -A *.so | grep -n > MPIR_F_NeedInit > > 12591:libfmpich.so: U MPIR_F_NeedInit > 18943:libmpi.so:000000000031ab00 D MPIR_F_NeedInit > 27046:libmpich.so:000000000031ab00 D MPIR_F_NeedInit > 34565:libmpichf90.so: U MPIR_F_NeedInit > 40333:libmpifort.so: U MPIR_F_NeedInit > 46685:libmpl.so:000000000031ab00 D MPIR_F_NeedInit > 54788:libopa.so:000000000031ab00 D MPIR_F_NeedInit > > Are we doing something wrong? > > Thank you in advance for your time > best > Massimiliano and Franco > From alexprescott at email.arizona.edu Mon Apr 6 10:27:47 2020 From: alexprescott at email.arizona.edu (Alexander B Prescott) Date: Mon, 6 Apr 2020 08:27:47 -0700 Subject: [petsc-users] [EXT]Re: Zero diagonal Error Code with all non-zero diagonal entries In-Reply-To: References: Message-ID: Thank you Matt, you were right about the residual function, the error was subtle. I've gotten the analytical Jacobian running and have some new questions, but I'll start a new thread for that. Best, Alexander On Fri, Apr 3, 2020 at 10:39 AM Matthew Knepley wrote: > *External Email* > On Fri, Apr 3, 2020 at 12:20 PM Alexander B Prescott < > alexprescott at email.arizona.edu> wrote: > >> Hi Matt, >> >> Thanks for your help. I've done as you suggested and it still doesn't >> work -- I get the same error message as before if I add the -snes_fd flag >> to the command line. 
I also added -info and included that output below. >> Also, a similar error is returned with other PC types, e.g LU. >> > > Great. I now suspect your residual is flawed. I have several suggestions: > > a) It looks like you might have an indexing bug, since your residual > does not depend on dof 0. PETSc uses 0-based indexing. > > b) I think the best initial check is to put in the exact solution and > check that the residual is 0. I do this in SNES ex5 using MMS. > > c) -snes_fd forms the Jacobian without coloring. Giving no options uses > coloring. > > Thanks, > > Matt > > >> [0] PetscInitialize(): PETSc successfully started: number of processors = >> 1 >> [0] PetscGetHostName(): Rejecting domainname, likely is NIS login2.(none) >> [0] PetscInitialize(): Running on machine: login2 >> [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 >> -2080374784 max tags = 268435455 >> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 >> -2080374784 >> [0] DMGetDMSNES(): Creating new DMSNES >> [0] PetscGetHostName(): Rejecting domainname, likely is NIS login2.(none) >> [0] SNESSetFromOptions(): Setting default finite difference Jacobian >> matrix >> [0] DMCreateMatrix_Shell(): Naively creating matrix using global vector >> distribution without preallocation >> [0] MatSetUp(): Warning not preallocating matrix storage >> [0] DMGetDMKSP(): Creating new DMKSP >> 0 SNES Function norm 1.000000000000e+01 >> [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 25 X 25; storage space: 70 >> unneeded,55 used >> [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0 >> [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3 >> [0] MatCheckCompressedRow(): Found the ratio (num_zerorows >> 0)/(num_localrows 25) < 0.6. Do not use CompressedRow routines. >> [0] MatSeqAIJCheckInode(): Found 25 nodes out of 25 rows. Not using Inode >> routines >> [0] SNESComputeJacobian(): Rebuilding preconditioner >> [0] PCSetUp(): Setting up PC for first time >> [0] PCSetUp_MG(): Using outer operators to define finest grid operator >> because PCMGGetSmoother(pc,nlevels-1,&ksp);KSPSetOperators(ksp,...); >> was not called. >> [0] PCSetUp(): Setting up PC for first time >> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is >> unchanged >> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is >> unchanged >> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is >> unchanged >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Arguments are incompatible >> [0]PETSC ERROR: Zero diagonal on row 0 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> .... >> >> >> Best, >> Alexander >> >> >> >> On Thu, Apr 2, 2020 at 9:07 PM Matthew Knepley wrote: >> >>> *External Email* >>> On Thu, Apr 2, 2020 at 11:33 PM Alexander B Prescott < >>> alexprescott at email.arizona.edu> wrote: >>> >>>> Hello, >>>> >>>> I am teaching myself how to use Petsc for nonlinear equations and I've >>>> run into a problem that I can't quite figure out. I am trying to use the >>>> matrix coloring routines for the finite difference Jacobian approximation, >>>> and I've followed the steps in the manual to do this. 
>>>> When I run the program with a MG preconditioner, I get back the error: >>>> >>>> [0]PETSC ERROR: --------------------- Error Message >>>> ---------------------------- >>>> [0]PETSC ERROR: Arguments are incompatible >>>> [0]PETSC ERROR: Zero diagonal on row 0 >>>> ..... >>>> >>>> >>> The easiest thing to try is just comment out all your matrix stuff >>> including SNESSetJacobian(). PETSc will do the coloring automatically. >>> If this works, you know its your coloring. If not, then its something >>> with the MG setup. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> What's interesting is that after I've added non-zero entries to the >>>> matrix with MatrixSetValues() and assembled the matrix >>>> with MatAssemblyBegin() + MatAssemblyEnd(), I can verify that every >>>> diagonal entry is non-zero with a call to MatGetValues. I've included a >>>> relevant code snippet below and I'm happy to send more. Any guidance is >>>> greatly appreciated. >>>> >>>> Command line: >>>> >>>> mpirun -n 1 ./program -snes_view -snes_converged_reason -snes_monitor >>>> -ksp_monitor -ksp_converged_reason -pc_type mg >>>> >>>> Code snippet: >>>> >>>> ierr = FormJacobianColoring(snes,J);CHKERRQ(ierr); // this function >>>> set's matrix values and assembles the matrix >>>> >>>> // removed the code, but this is where I've used MatGetValues() to >>>> ensure that the diagonal of J (as well as other entries) has been set to 1.0 >>>> >>>> ierr = MatColoringCreate(J,&coloring);CHKERRQ(ierr); >>>> ierr = MatColoringSetType(coloring,MATCOLORINGSL);CHKERRQ(ierr); >>>> ierr = MatColoringSetFromOptions(coloring);CHKERRQ(ierr); >>>> ierr = MatColoringApply(coloring,&iscoloring);CHKERRQ(ierr); >>>> ierr = MatColoringDestroy(&coloring);CHKERRQ(ierr); >>>> /* >>>> Create the data structure that SNESComputeJacobianDefaultColor() uses >>>> to compute the actual Jacobians via finite differences. >>>> */ >>>> ierr = MatFDColoringCreate(J,iscoloring,&fdcoloring);CHKERRQ(ierr); >>>> ierr = MatFDColoringSetFunction(fdcoloring,(PetscErrorCode >>>> (*)(void))FormFunction,NULL);CHKERRQ(ierr); >>>> ierr = MatFDColoringSetFromOptions(fdcoloring);CHKERRQ(ierr); >>>> ierr = MatFDColoringSetUp(J,iscoloring,fdcoloring);CHKERRQ(ierr); >>>> ierr = >>>> SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);CHKERRQ(ierr); >>>> >>>> ierr = SNESSetFromOptions(snes);CHKERRQ(ierr); >>>> >>>> ierr = VecSet(x,0.001);CHKERRQ(ierr); >>>> ierr = SNESSolve(snes,NULL,x);CHKERRQ(ierr); >>>> >>>> >>>> >>>> And here's the full error code: >>>> >>>> [0]PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> [0]PETSC ERROR: Arguments are incompatible >>>> [0]PETSC ERROR: Zero diagonal on row 0 >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>> for trouble shooting. 
>>>> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018 >>>> [0]PETSC ERROR: ./petsc_flowroute on a named i0n22 by alexprescott Thu >>>> Apr 2 20:14:01 2020 >>>> [0]PETSC ERROR: Configure options >>>> --prefix=/cm/shared/uaapps/petsc/3.10.3 --download-fblaslapack >>>> --download-metis --download-parmetis --download-hypre PETSC_ARCH=linux-gnu >>>> --with-debugging=no COPTFLAGS=-O3 CXXOPTFLAGS=-O3 >>>> [0]PETSC ERROR: #1 MatInvertDiagonal_SeqAIJ() line 1662 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c >>>> [0]PETSC ERROR: #2 MatSOR_SeqAIJ() line 1693 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/impls/aij/seq/aij.c >>>> [0]PETSC ERROR: #3 MatSOR() line 3932 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: #4 PCApply_SOR() line 31 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/sor/sor.c >>>> [0]PETSC ERROR: #5 PCApply() line 462 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: #6 KSP_PCApply() line 281 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h >>>> [0]PETSC ERROR: #7 KSPInitialResidual() line 67 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c >>>> [0]PETSC ERROR: #8 KSPSolve_GMRES() line 233 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c >>>> [0]PETSC ERROR: #9 KSPSolve() line 780 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 367 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/cheby/cheby.c >>>> [0]PETSC ERROR: #11 KSPSolve() line 780 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: #12 PCMGMCycle_Private() line 20 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c >>>> [0]PETSC ERROR: #13 PCApply_MG() line 377 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/impls/mg/mg.c >>>> [0]PETSC ERROR: #14 PCApply() line 462 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: #15 KSP_PCApply() line 281 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/include/petsc/private/kspimpl.h >>>> [0]PETSC ERROR: #16 KSPInitialResidual() line 67 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itres.c >>>> [0]PETSC ERROR: #17 KSPSolve_GMRES() line 233 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/impls/gmres/gmres.c >>>> [0]PETSC ERROR: #18 KSPSolve() line 780 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: #19 SNESSolve_NEWTONLS() line 224 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/impls/ls/ls.c >>>> [0]PETSC ERROR: #20 SNESSolve() line 4397 in >>>> /cm/local/uabuild/petsc/petsc-3.10.3/src/snes/interface/snes.c >>>> [0]PETSC ERROR: #21 main() line 705 in >>>> /home/u16/alexprescott/petsc_example/flowroute/petsc_flowroute.c >>>> [0]PETSC ERROR: PETSc Option Table entries: >>>> [0]PETSC ERROR: -ksp_converged_reason >>>> [0]PETSC ERROR: -ksp_monitor >>>> [0]PETSC ERROR: -pc_type mg >>>> [0]PETSC ERROR: -snes_converged_reason >>>> [0]PETSC ERROR: -snes_monitor >>>> [0]PETSC ERROR: -snes_view >>>> [0]PETSC ERROR: ----------------End of Error Message -------send entire >>>> error message to petsc-maint at mcs.anl.gov---------- >>>> application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 >>>> >>>> >>>> Printed matrix for a 25 x 25 example >>>> >>>> 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 >>>> 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 >>>> 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 >>>> 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 >>>> 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 >>>> 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 >>>> 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 >>>> 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 >>>> 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 0 0 >>>> 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 >>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 >>>> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 >>>> >>>> >>>> >>>> Best, >>>> Alexander >>>> >>>> >>>> >>>> -- >>>> Alexander Prescott >>>> alexprescott at email.arizona.edu >>>> PhD Candidate, The University of Arizona >>>> Department of Geosciences >>>> 1040 E. 4th Street >>>> Tucson, AZ, 85721 >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> >> >> -- >> Alexander Prescott >> alexprescott at email.arizona.edu >> PhD Candidate, The University of Arizona >> Department of Geosciences >> 1040 E. 4th Street >> Tucson, AZ, 85721 >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Mon Apr 6 10:45:36 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 6 Apr 2020 11:45:36 -0400 Subject: [petsc-users] error rebasing with master - DMPlex - Hypre Message-ID: 11:41 mark/feature-xgc-interface-rebase<> ~/Codes/petsc$ make PETSC_DIR=/Users/markadams/Codes/petsc PETSC_ARCH=arch-macosx-gnu-g check Running check examples to verify correct installation Using PETSC_DIR=/Users/markadams/Codes/petsc and PETSC_ARCH=arch-macosx-gnu-g make[3]: [ex19.PETSc] Error 2 (ignored) *******************Error detected during compile or link!******************* See http://www.mcs.anl.gov/petsc/documentation/faq.html /Users/markadams/Codes/petsc/src/snes/tutorials ex19 ********************************************************************************* /usr/local/Cellar/mpich/3.3.2/bin/mpicc -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g -O0 -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g -O0 -I/Users/markadams/Codes/petsc/include -I/Users/markadams/Codes/petsc/arch-macosx-gnu-g/include -I/opt/X11/include -I/usr/local/Cellar/mpich/3.3.2/include ex19.c -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -Wl,-rpath,/usr/local/Cellar/mpich/3.3.2/lib -L/usr/local/Cellar/mpich/3.3.2/lib -Wl,-rpath,/usr/local/Cellar/gcc/9.3.0/lib/gcc/9/gcc/x86_64-apple-darwin19/9.3.0 -L/usr/local/Cellar/gcc/9.3.0/lib/gcc/9/gcc/x86_64-apple-darwin19/9.3.0 -Wl,-rpath,/usr/local/Cellar/gcc/9.3.0/lib/gcc/9 -L/usr/local/Cellar/gcc/9.3.0/lib/gcc/9 -lpetsc -lfftw3_mpi -lfftw3 -lp4est -lsc -llapack -lblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -ltriangle -lz -lX11 -lc++ -ldl -lmpifort -lmpi -lpmpi -lgfortran -lquadmath -lm -lc++ -ldl -o ex19 Undefined symbols for architecture x86_64: "_DMPlexGetHybridBounds", referenced from: import-atom in libpetsc.dylib ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) make[4]: *** [ex19] Error 1 make[3]: [ex47.PETSc] Error 2 (ignored) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 107232 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 3612807 bytes Desc: not available URL: From shrirang.abhyankar at pnnl.gov Mon Apr 6 11:13:23 2020 From: shrirang.abhyankar at pnnl.gov (Abhyankar, Shrirang G) Date: Mon, 6 Apr 2020 16:13:23 +0000 Subject: [petsc-users] DMCreateSectionSF Message-ID: <4D249062-8B29-46B4-954B-E407235A5D28@pnnl.gov> I am getting an error for DMCreateSectionSF() with latest petsc-master. I see a DMCreateDefaultSF in petscdm.h. Has DMCreateSectionSF been renamed to DMCreateDefaultSF? 
Thanks, Shri -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaramillopalma at gmail.com Mon Apr 6 11:53:57 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 13:53:57 -0300 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc Message-ID: hello everyone, I have a fresh installation of the 3.13.0 version with pastix. Like with previous versions, I'm using the options --with-x11=0 --with-x=0 --with-windows-graphics=0 to disable X11 however, when compiling my program foo and doing $ ldd foo between the linked libraries there appear: libXNVCtrl.so.0 and libX11.so.6 the first one related to NVIDIA. I observed that this does not happen when installing PETSc without hwloc. In this new version, PETSc requires to install hwloc when trying to install pastix. In previous versions of PETSc (eg 3.11.2) that wasn't necessary. I'm working in a cluster where I have no access to these X11-related libraries and that's why I need them not be linked. Is it there some way to disable X11 when installing hwloc? maybe enforcing some configuration variables when installing it through petsc or installing it independently? thanks a lot! Below the configuration command of the two installations I've tried with the 3.13.0 version. =================== WITH PASTIX =================== ./configure --with-make-np=20 --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 --doCleanup=0 \ --with-mpi=1 \ --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ --download-scalapack \ --download-openblas \ --download-mumps \ --download-superlu_dist \ --download-metis \ --download-parmetis \ --download-ptscotch \ --download-hypre \ *--download-pastix \--download-hwloc \* --with-64-bit-indices=1 \ LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ --with-cxx-dialect=C++11 \ --with-x11=0 --with-x=0 --with-windows-graphics=0 \ COPTFLAGS="-O3 -march=native -mtune=native" \ CXXOPTFLAGS="-O3 -march=native -mtune=native" \ FOPTFLAGS="-O3 -march=native -mtune=native" =================== WITHOUT PASTIX =================== the same as above but the options "--download-pastix --download-hwloc" ====================================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 6 11:57:54 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Apr 2020 12:57:54 -0400 Subject: [petsc-users] error rebasing with master - DMPlex - Hypre In-Reply-To: References: Message-ID: I removed that function. Is it still in your Fortran bindings? 
Matt On Mon, Apr 6, 2020 at 11:47 AM Mark Adams wrote: > 11:41 mark/feature-xgc-interface-rebase<> ~/Codes/petsc$ make > PETSC_DIR=/Users/markadams/Codes/petsc PETSC_ARCH=arch-macosx-gnu-g check > Running check examples to verify correct installation > Using PETSC_DIR=/Users/markadams/Codes/petsc and > PETSC_ARCH=arch-macosx-gnu-g > make[3]: [ex19.PETSc] Error 2 (ignored) > *******************Error detected during compile or > link!******************* > See http://www.mcs.anl.gov/petsc/documentation/faq.html > /Users/markadams/Codes/petsc/src/snes/tutorials ex19 > > ********************************************************************************* > /usr/local/Cellar/mpich/3.3.2/bin/mpicc -Wl,-multiply_defined,suppress > -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs > -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g -O0 > -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress > -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind > -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden > -g -O0 -I/Users/markadams/Codes/petsc/include > -I/Users/markadams/Codes/petsc/arch-macosx-gnu-g/include -I/opt/X11/include > -I/usr/local/Cellar/mpich/3.3.2/include ex19.c > -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib > -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib > -Wl,-rpath,/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib > -L/Users/markadams/Codes/petsc/arch-macosx-gnu-g/lib > -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib > -Wl,-rpath,/usr/local/Cellar/mpich/3.3.2/lib > -L/usr/local/Cellar/mpich/3.3.2/lib > -Wl,-rpath,/usr/local/Cellar/gcc/9.3.0/lib/gcc/9/gcc/x86_64-apple-darwin19/9.3.0 > -L/usr/local/Cellar/gcc/9.3.0/lib/gcc/9/gcc/x86_64-apple-darwin19/9.3.0 > -Wl,-rpath,/usr/local/Cellar/gcc/9.3.0/lib/gcc/9 > -L/usr/local/Cellar/gcc/9.3.0/lib/gcc/9 -lpetsc -lfftw3_mpi -lfftw3 -lp4est > -lsc -llapack -lblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 > -lparmetis -lmetis -ltriangle -lz -lX11 -lc++ -ldl -lmpifort -lmpi -lpmpi > -lgfortran -lquadmath -lm -lc++ -ldl -o ex19 > Undefined symbols for architecture x86_64: > "_DMPlexGetHybridBounds", referenced from: > import-atom in libpetsc.dylib > ld: symbol(s) not found for architecture x86_64 > clang: error: linker command failed with exit code 1 (use -v to see > invocation) > make[4]: *** [ex19] Error 1 > make[3]: [ex47.PETSc] Error 2 (ignored) > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 6 11:58:58 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Apr 2020 12:58:58 -0400 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: On Mon, Apr 6, 2020 at 12:55 PM Alfredo Jaramillo wrote: > hello everyone, > > I have a fresh installation of the 3.13.0 version with pastix. 
Like with > previous versions, I'm using the options > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > to disable X11 > > however, when compiling my program foo and doing > > $ ldd foo > > between the linked libraries there appear: > libXNVCtrl.so.0 and libX11.so.6 > > the first one related to NVIDIA. I observed that this does not happen when > installing PETSc without hwloc. In this new version, PETSc requires to > install hwloc when trying to install pastix. In previous versions of PETSc > (eg 3.11.2) that wasn't necessary. > > I'm working in a cluster where I have no access to these X11-related > libraries and that's why I need them not be linked. Is it there some way to > disable X11 when installing hwloc? maybe enforcing some configuration > variables when installing it through petsc or installing it independently? > Can you send configure.log? Thanks, Matt > thanks a lot! > > Below the configuration command of the two installations I've tried with > the 3.13.0 version. > > =================== WITH PASTIX =================== > > ./configure --with-make-np=20 > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 > --doCleanup=0 \ > --with-mpi=1 \ > --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > --download-scalapack \ > --download-openblas \ > --download-mumps \ > --download-superlu_dist \ > --download-metis \ > --download-parmetis \ > --download-ptscotch \ > --download-hypre \ > > *--download-pastix \--download-hwloc \* > --with-64-bit-indices=1 \ > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > --with-cxx-dialect=C++11 \ > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > COPTFLAGS="-O3 -march=native -mtune=native" \ > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > FOPTFLAGS="-O3 -march=native -mtune=native" > > =================== WITHOUT PASTIX =================== > > the same as above but the options "--download-pastix --download-hwloc" > > ====================================================== > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaramillopalma at gmail.com Mon Apr 6 12:02:09 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 14:02:09 -0300 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Sure, here it is. thx On Mon, Apr 6, 2020 at 1:59 PM Matthew Knepley wrote: > On Mon, Apr 6, 2020 at 12:55 PM Alfredo Jaramillo < > ajaramillopalma at gmail.com> wrote: > >> hello everyone, >> >> I have a fresh installation of the 3.13.0 version with pastix. Like with >> previous versions, I'm using the options >> >> --with-x11=0 --with-x=0 --with-windows-graphics=0 >> >> to disable X11 >> >> however, when compiling my program foo and doing >> >> $ ldd foo >> >> between the linked libraries there appear: >> libXNVCtrl.so.0 and libX11.so.6 >> >> the first one related to NVIDIA. I observed that this does not happen >> when installing PETSc without hwloc. In this new version, PETSc requires to >> install hwloc when trying to install pastix. In previous versions of PETSc >> (eg 3.11.2) that wasn't necessary. >> >> I'm working in a cluster where I have no access to these X11-related >> libraries and that's why I need them not be linked. 
Is it there some way to >> disable X11 when installing hwloc? maybe enforcing some configuration >> variables when installing it through petsc or installing it independently? >> > > Can you send configure.log? > > Thanks, > > Matt > > >> thanks a lot! >> >> Below the configuration command of the two installations I've tried with >> the 3.13.0 version. >> >> =================== WITH PASTIX =================== >> >> ./configure --with-make-np=20 >> --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 >> --doCleanup=0 \ >> --with-mpi=1 \ >> --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ >> --download-scalapack \ >> --download-openblas \ >> --download-mumps \ >> --download-superlu_dist \ >> --download-metis \ >> --download-parmetis \ >> --download-ptscotch \ >> --download-hypre \ >> >> *--download-pastix \--download-hwloc \* >> --with-64-bit-indices=1 \ >> LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ >> --with-cxx-dialect=C++11 \ >> --with-x11=0 --with-x=0 --with-windows-graphics=0 \ >> COPTFLAGS="-O3 -march=native -mtune=native" \ >> CXXOPTFLAGS="-O3 -march=native -mtune=native" \ >> FOPTFLAGS="-O3 -march=native -mtune=native" >> >> =================== WITHOUT PASTIX =================== >> >> the same as above but the options "--download-pastix --download-hwloc" >> >> ====================================================== >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 7698102 bytes Desc: not available URL: From balay at mcs.anl.gov Mon Apr 6 12:12:38 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 12:12:38 -0500 (CDT) Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: you can try: --download-pastix --download-hwloc --download-hwloc-configure-arguments=--without-x We should fix this to automatically use --with-x=0/1 Satish On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > hello everyone, > > I have a fresh installation of the 3.13.0 version with pastix. Like with > previous versions, I'm using the options > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > to disable X11 > > however, when compiling my program foo and doing > > $ ldd foo > > between the linked libraries there appear: > libXNVCtrl.so.0 and libX11.so.6 > > the first one related to NVIDIA. I observed that this does not happen when > installing PETSc without hwloc. In this new version, PETSc requires to > install hwloc when trying to install pastix. In previous versions of PETSc > (eg 3.11.2) that wasn't necessary. > > I'm working in a cluster where I have no access to these X11-related > libraries and that's why I need them not be linked. Is it there some way to > disable X11 when installing hwloc? maybe enforcing some configuration > variables when installing it through petsc or installing it independently? > > thanks a lot! > > Below the configuration command of the two installations I've tried with > the 3.13.0 version. 
> > =================== WITH PASTIX =================== > > ./configure --with-make-np=20 > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 > --doCleanup=0 \ > --with-mpi=1 \ > --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > --download-scalapack \ > --download-openblas \ > --download-mumps \ > --download-superlu_dist \ > --download-metis \ > --download-parmetis \ > --download-ptscotch \ > --download-hypre \ > > *--download-pastix \--download-hwloc \* > --with-64-bit-indices=1 \ > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > --with-cxx-dialect=C++11 \ > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > COPTFLAGS="-O3 -march=native -mtune=native" \ > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > FOPTFLAGS="-O3 -march=native -mtune=native" > > =================== WITHOUT PASTIX =================== > > the same as above but the options "--download-pastix --download-hwloc" > > ====================================================== > From knepley at gmail.com Mon Apr 6 12:12:45 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Apr 2020 13:12:45 -0400 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Hmm, there is no sign of libX11 in that log. Can you send me $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/petscvariables Thanks, Matt On Mon, Apr 6, 2020 at 1:02 PM Alfredo Jaramillo wrote: > Sure, here it is. > thx > > On Mon, Apr 6, 2020 at 1:59 PM Matthew Knepley wrote: > >> On Mon, Apr 6, 2020 at 12:55 PM Alfredo Jaramillo < >> ajaramillopalma at gmail.com> wrote: >> >>> hello everyone, >>> >>> I have a fresh installation of the 3.13.0 version with pastix. Like with >>> previous versions, I'm using the options >>> >>> --with-x11=0 --with-x=0 --with-windows-graphics=0 >>> >>> to disable X11 >>> >>> however, when compiling my program foo and doing >>> >>> $ ldd foo >>> >>> between the linked libraries there appear: >>> libXNVCtrl.so.0 and libX11.so.6 >>> >>> the first one related to NVIDIA. I observed that this does not happen >>> when installing PETSc without hwloc. In this new version, PETSc requires to >>> install hwloc when trying to install pastix. In previous versions of PETSc >>> (eg 3.11.2) that wasn't necessary. >>> >>> I'm working in a cluster where I have no access to these X11-related >>> libraries and that's why I need them not be linked. Is it there some way to >>> disable X11 when installing hwloc? maybe enforcing some configuration >>> variables when installing it through petsc or installing it independently? >>> >> >> Can you send configure.log? >> >> Thanks, >> >> Matt >> >> >>> thanks a lot! >>> >>> Below the configuration command of the two installations I've tried with >>> the 3.13.0 version. 
>>> >>> =================== WITH PASTIX =================== >>> >>> ./configure --with-make-np=20 >>> --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 >>> --doCleanup=0 \ >>> --with-mpi=1 \ >>> --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ >>> --download-scalapack \ >>> --download-openblas \ >>> --download-mumps \ >>> --download-superlu_dist \ >>> --download-metis \ >>> --download-parmetis \ >>> --download-ptscotch \ >>> --download-hypre \ >>> >>> *--download-pastix \--download-hwloc \* >>> --with-64-bit-indices=1 \ >>> LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ >>> --with-cxx-dialect=C++11 \ >>> --with-x11=0 --with-x=0 --with-windows-graphics=0 \ >>> COPTFLAGS="-O3 -march=native -mtune=native" \ >>> CXXOPTFLAGS="-O3 -march=native -mtune=native" \ >>> FOPTFLAGS="-O3 -march=native -mtune=native" >>> >>> =================== WITHOUT PASTIX =================== >>> >>> the same as above but the options "--download-pastix --download-hwloc" >>> >>> ====================================================== >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaramillopalma at gmail.com Mon Apr 6 12:17:46 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 14:17:46 -0300 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: attaching the file thanks! On Mon, Apr 6, 2020 at 2:12 PM Matthew Knepley wrote: > Hmm, there is no sign of libX11 in that log. Can you send me > > $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/petscvariables > > Thanks, > > Matt > > On Mon, Apr 6, 2020 at 1:02 PM Alfredo Jaramillo < > ajaramillopalma at gmail.com> wrote: > >> Sure, here it is. >> thx >> >> On Mon, Apr 6, 2020 at 1:59 PM Matthew Knepley wrote: >> >>> On Mon, Apr 6, 2020 at 12:55 PM Alfredo Jaramillo < >>> ajaramillopalma at gmail.com> wrote: >>> >>>> hello everyone, >>>> >>>> I have a fresh installation of the 3.13.0 version with pastix. Like >>>> with previous versions, I'm using the options >>>> >>>> --with-x11=0 --with-x=0 --with-windows-graphics=0 >>>> >>>> to disable X11 >>>> >>>> however, when compiling my program foo and doing >>>> >>>> $ ldd foo >>>> >>>> between the linked libraries there appear: >>>> libXNVCtrl.so.0 and libX11.so.6 >>>> >>>> the first one related to NVIDIA. I observed that this does not happen >>>> when installing PETSc without hwloc. In this new version, PETSc requires to >>>> install hwloc when trying to install pastix. In previous versions of PETSc >>>> (eg 3.11.2) that wasn't necessary. >>>> >>>> I'm working in a cluster where I have no access to these X11-related >>>> libraries and that's why I need them not be linked. Is it there some way to >>>> disable X11 when installing hwloc? maybe enforcing some configuration >>>> variables when installing it through petsc or installing it independently? >>>> >>> >>> Can you send configure.log? >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> thanks a lot! 
>>>> >>>> Below the configuration command of the two installations I've tried >>>> with the 3.13.0 version. >>>> >>>> =================== WITH PASTIX =================== >>>> >>>> ./configure --with-make-np=20 >>>> --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 >>>> --doCleanup=0 \ >>>> --with-mpi=1 \ >>>> --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 >>>> \ >>>> --download-scalapack \ >>>> --download-openblas \ >>>> --download-mumps \ >>>> --download-superlu_dist \ >>>> --download-metis \ >>>> --download-parmetis \ >>>> --download-ptscotch \ >>>> --download-hypre \ >>>> >>>> *--download-pastix \--download-hwloc \* >>>> --with-64-bit-indices=1 \ >>>> LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ >>>> --with-cxx-dialect=C++11 \ >>>> --with-x11=0 --with-x=0 --with-windows-graphics=0 \ >>>> COPTFLAGS="-O3 -march=native -mtune=native" \ >>>> CXXOPTFLAGS="-O3 -march=native -mtune=native" \ >>>> FOPTFLAGS="-O3 -march=native -mtune=native" >>>> >>>> =================== WITHOUT PASTIX =================== >>>> >>>> the same as above but the options "--download-pastix --download-hwloc" >>>> >>>> ====================================================== >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petscvariables Type: application/octet-stream Size: 11891 bytes Desc: not available URL: From ajaramillopalma at gmail.com Mon Apr 6 12:28:53 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 14:28:53 -0300 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: hello Satish, adding --download-hwloc-configure-arguments=--without-x worked perfectly thank you! On Mon, Apr 6, 2020 at 2:12 PM Satish Balay wrote: > you can try: > > --download-pastix --download-hwloc > --download-hwloc-configure-arguments=--without-x > > We should fix this to automatically use --with-x=0/1 > > Satish > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > hello everyone, > > > > I have a fresh installation of the 3.13.0 version with pastix. Like with > > previous versions, I'm using the options > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > to disable X11 > > > > however, when compiling my program foo and doing > > > > $ ldd foo > > > > between the linked libraries there appear: > > libXNVCtrl.so.0 and libX11.so.6 > > > > the first one related to NVIDIA. I observed that this does not happen > when > > installing PETSc without hwloc. In this new version, PETSc requires to > > install hwloc when trying to install pastix. In previous versions of > PETSc > > (eg 3.11.2) that wasn't necessary. > > > > I'm working in a cluster where I have no access to these X11-related > > libraries and that's why I need them not be linked. Is it there some way > to > > disable X11 when installing hwloc? 
maybe enforcing some configuration > > variables when installing it through petsc or installing it > independently? > > > > thanks a lot! > > > > Below the configuration command of the two installations I've tried with > > the 3.13.0 version. > > > > =================== WITH PASTIX =================== > > > > ./configure --with-make-np=20 > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 > > --doCleanup=0 \ > > --with-mpi=1 \ > > --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > --download-scalapack \ > > --download-openblas \ > > --download-mumps \ > > --download-superlu_dist \ > > --download-metis \ > > --download-parmetis \ > > --download-ptscotch \ > > --download-hypre \ > > > > *--download-pastix \--download-hwloc \* > > --with-64-bit-indices=1 \ > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > --with-cxx-dialect=C++11 \ > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > =================== WITHOUT PASTIX =================== > > > > the same as above but the options "--download-pastix --download-hwloc" > > > > ====================================================== > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 6 12:29:13 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Apr 2020 13:29:13 -0400 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Okay, what it must be is that hwloc dynamically links to X11, so its never given explicitly anywhere by us, but hwloc sucks it in. Satish is right about the fix. Thanks, Matt On Mon, Apr 6, 2020 at 1:18 PM Alfredo Jaramillo wrote: > attaching the file > thanks! > > On Mon, Apr 6, 2020 at 2:12 PM Matthew Knepley wrote: > >> Hmm, there is no sign of libX11 in that log. Can you send me >> >> $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/petscvariables >> >> Thanks, >> >> Matt >> >> On Mon, Apr 6, 2020 at 1:02 PM Alfredo Jaramillo < >> ajaramillopalma at gmail.com> wrote: >> >>> Sure, here it is. >>> thx >>> >>> On Mon, Apr 6, 2020 at 1:59 PM Matthew Knepley >>> wrote: >>> >>>> On Mon, Apr 6, 2020 at 12:55 PM Alfredo Jaramillo < >>>> ajaramillopalma at gmail.com> wrote: >>>> >>>>> hello everyone, >>>>> >>>>> I have a fresh installation of the 3.13.0 version with pastix. Like >>>>> with previous versions, I'm using the options >>>>> >>>>> --with-x11=0 --with-x=0 --with-windows-graphics=0 >>>>> >>>>> to disable X11 >>>>> >>>>> however, when compiling my program foo and doing >>>>> >>>>> $ ldd foo >>>>> >>>>> between the linked libraries there appear: >>>>> libXNVCtrl.so.0 and libX11.so.6 >>>>> >>>>> the first one related to NVIDIA. I observed that this does not happen >>>>> when installing PETSc without hwloc. In this new version, PETSc requires to >>>>> install hwloc when trying to install pastix. In previous versions of PETSc >>>>> (eg 3.11.2) that wasn't necessary. >>>>> >>>>> I'm working in a cluster where I have no access to these X11-related >>>>> libraries and that's why I need them not be linked. Is it there some way to >>>>> disable X11 when installing hwloc? maybe enforcing some configuration >>>>> variables when installing it through petsc or installing it independently? >>>>> >>>> >>>> Can you send configure.log? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> thanks a lot! 
>>>>> >>>>> Below the configuration command of the two installations I've tried >>>>> with the 3.13.0 version. >>>>> >>>>> =================== WITH PASTIX =================== >>>>> >>>>> ./configure --with-make-np=20 >>>>> --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 >>>>> --doCleanup=0 \ >>>>> --with-mpi=1 \ >>>>> --with-valgrind=1 >>>>> --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ >>>>> --download-scalapack \ >>>>> --download-openblas \ >>>>> --download-mumps \ >>>>> --download-superlu_dist \ >>>>> --download-metis \ >>>>> --download-parmetis \ >>>>> --download-ptscotch \ >>>>> --download-hypre \ >>>>> >>>>> *--download-pastix \--download-hwloc \* >>>>> --with-64-bit-indices=1 \ >>>>> LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ >>>>> --with-cxx-dialect=C++11 \ >>>>> --with-x11=0 --with-x=0 --with-windows-graphics=0 \ >>>>> COPTFLAGS="-O3 -march=native -mtune=native" \ >>>>> CXXOPTFLAGS="-O3 -march=native -mtune=native" \ >>>>> FOPTFLAGS="-O3 -march=native -mtune=native" >>>>> >>>>> =================== WITHOUT PASTIX =================== >>>>> >>>>> the same as above but the options "--download-pastix --download-hwloc" >>>>> >>>>> ====================================================== >>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaramillopalma at gmail.com Mon Apr 6 12:32:52 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 14:32:52 -0300 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: I understand, Matthew, thank you both! On Mon, Apr 6, 2020 at 2:29 PM Matthew Knepley wrote: > Okay, what it must be is that hwloc dynamically links to X11, so its never > given explicitly anywhere by us, but hwloc sucks it in. > Satish is right about the fix. > > Thanks, > > Matt > > On Mon, Apr 6, 2020 at 1:18 PM Alfredo Jaramillo < > ajaramillopalma at gmail.com> wrote: > >> attaching the file >> thanks! >> >> On Mon, Apr 6, 2020 at 2:12 PM Matthew Knepley wrote: >> >>> Hmm, there is no sign of libX11 in that log. Can you send me >>> >>> $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/petscvariables >>> >>> Thanks, >>> >>> Matt >>> >>> On Mon, Apr 6, 2020 at 1:02 PM Alfredo Jaramillo < >>> ajaramillopalma at gmail.com> wrote: >>> >>>> Sure, here it is. >>>> thx >>>> >>>> On Mon, Apr 6, 2020 at 1:59 PM Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Apr 6, 2020 at 12:55 PM Alfredo Jaramillo < >>>>> ajaramillopalma at gmail.com> wrote: >>>>> >>>>>> hello everyone, >>>>>> >>>>>> I have a fresh installation of the 3.13.0 version with pastix. 
Like >>>>>> with previous versions, I'm using the options >>>>>> >>>>>> --with-x11=0 --with-x=0 --with-windows-graphics=0 >>>>>> >>>>>> to disable X11 >>>>>> >>>>>> however, when compiling my program foo and doing >>>>>> >>>>>> $ ldd foo >>>>>> >>>>>> between the linked libraries there appear: >>>>>> libXNVCtrl.so.0 and libX11.so.6 >>>>>> >>>>>> the first one related to NVIDIA. I observed that this does not happen >>>>>> when installing PETSc without hwloc. In this new version, PETSc requires to >>>>>> install hwloc when trying to install pastix. In previous versions of PETSc >>>>>> (eg 3.11.2) that wasn't necessary. >>>>>> >>>>>> I'm working in a cluster where I have no access to these X11-related >>>>>> libraries and that's why I need them not be linked. Is it there some way to >>>>>> disable X11 when installing hwloc? maybe enforcing some configuration >>>>>> variables when installing it through petsc or installing it independently? >>>>>> >>>>> >>>>> Can you send configure.log? >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> thanks a lot! >>>>>> >>>>>> Below the configuration command of the two installations I've tried >>>>>> with the 3.13.0 version. >>>>>> >>>>>> =================== WITH PASTIX =================== >>>>>> >>>>>> ./configure --with-make-np=20 >>>>>> --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 >>>>>> --doCleanup=0 \ >>>>>> --with-mpi=1 \ >>>>>> --with-valgrind=1 >>>>>> --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ >>>>>> --download-scalapack \ >>>>>> --download-openblas \ >>>>>> --download-mumps \ >>>>>> --download-superlu_dist \ >>>>>> --download-metis \ >>>>>> --download-parmetis \ >>>>>> --download-ptscotch \ >>>>>> --download-hypre \ >>>>>> >>>>>> *--download-pastix \--download-hwloc \* >>>>>> --with-64-bit-indices=1 \ >>>>>> LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ >>>>>> --with-cxx-dialect=C++11 \ >>>>>> --with-x11=0 --with-x=0 --with-windows-graphics=0 \ >>>>>> COPTFLAGS="-O3 -march=native -mtune=native" \ >>>>>> CXXOPTFLAGS="-O3 -march=native -mtune=native" \ >>>>>> FOPTFLAGS="-O3 -march=native -mtune=native" >>>>>> >>>>>> =================== WITHOUT PASTIX =================== >>>>>> >>>>>> the same as above but the options "--download-pastix --download-hwloc" >>>>>> >>>>>> ====================================================== >>>>>> >>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From franco.dassi at unimib.it Mon Apr 6 12:44:56 2020 From: franco.dassi at unimib.it (Franco Dassi) Date: Mon, 6 Apr 2020 19:44:56 +0200 Subject: [petsc-users] Issue with the installation of Petsc In-Reply-To: References: Message-ID: Dear all, Thank you for your quick answer. 
Here you are the .log file franco Il giorno lun 6 apr 2020 alle ore 16:09 Satish Balay ha scritto: > Can you send configure.log from the first build attempt? > > Satish > > On Mon, 6 Apr 2020, Franco Dassi wrote: > > > Good morning, > > > > We are using your Petsc library to solve linear systems coming from PDEs. > > > > However recently we are trying to install PETSC (maint branch) on a linux > > system but we have some issues. > > > > On the one hand, when we use the following configure command > > > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran > > --download-mpich --download-fblaslapack --with-mpi=1 > > --download-superlu_dist --download-mumps --download-hypre > > --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' > > CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 > > -march=native -mtune=native' --download-scalapack > > > > > > I obtain the following error: > > > > > =============================================================================== > > Configuring PETSc to compile on your system > > > > > =============================================================================== > > TESTING: checkFortranCompiler from > > config.setCompilers(config/BuildSystem/config/setCompilers.py:989) > > > > > ******************************************************************************* > > OSError while running ./configure > > > ------------------------------------------------------------------------------- > > Cannot run executables created with FC. If this machine uses a batch > system > > to submit jobs you will need to configure using ./configure with the > > additional option --with-batch. > > Otherwise there is problem with the compilers. Can you compile and run > > code with your compiler > > '/home/gurst/repositories/petsc/arch-linux-c-opt/bin/mpif90'? > > > ******************************************************************************* > > > > > > On the other hand if we add the --with-batch the library is compiled, but > > when we try to use it to run the code we obtain the following error (at > run > > time): > > > > ./testGBDForMixed3d: symbol lookup error: > > > /home/martinelli/repositories/petsc/arch-linux-c-opt//lib/libmpifort.so.12: > > undefined symbol: MPIR_F_NeedInit > > > > > > Looking the symbols table in the petsc/arch-linux-c-opt/lib directory, > > It seems that something is undefined: > > > > ~/repositories/petsc/arch-linux-c-opt/lib $ nm -A *.so | grep -n > > MPIR_F_NeedInit > > > > 12591:libfmpich.so: U MPIR_F_NeedInit > > 18943:libmpi.so:000000000031ab00 D MPIR_F_NeedInit > > 27046:libmpich.so:000000000031ab00 D MPIR_F_NeedInit > > 34565:libmpichf90.so: U MPIR_F_NeedInit > > 40333:libmpifort.so: U MPIR_F_NeedInit > > 46685:libmpl.so:000000000031ab00 D MPIR_F_NeedInit > > 54788:libopa.so:000000000031ab00 D MPIR_F_NeedInit > > > > Are we doing something wrong? > > > > Thank you in advance for your time > > best > > Massimiliano and Franco > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 592278 bytes Desc: not available URL: From balay at mcs.anl.gov Mon Apr 6 12:57:16 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 12:57:16 -0500 (CDT) Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Great! 
Wrt pastix dependency on hwloc - config/BuildSystem/config/packages/PaStiX.py has the following comment: # PaStiX.py does not absolutely require hwloc, but it performs better with it and can fail (in ways not easily tested) without it # https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 # https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html I have a fix in branch balay/fix-hwloc-x-dependency/maint [that does not need the extra --download-hwloc-configure-arguments=--without-x option]. Can you give this a try? Satish On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > hello Satish, > adding > --download-hwloc-configure-arguments=--without-x > worked perfectly > > thank you! > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay wrote: > > > you can try: > > > > --download-pastix --download-hwloc > > --download-hwloc-configure-arguments=--without-x > > > > We should fix this to automatically use --with-x=0/1 > > > > Satish > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > hello everyone, > > > > > > I have a fresh installation of the 3.13.0 version with pastix. Like with > > > previous versions, I'm using the options > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > to disable X11 > > > > > > however, when compiling my program foo and doing > > > > > > $ ldd foo > > > > > > between the linked libraries there appear: > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > the first one related to NVIDIA. I observed that this does not happen > > when > > > installing PETSc without hwloc. In this new version, PETSc requires to > > > install hwloc when trying to install pastix. In previous versions of > > PETSc > > > (eg 3.11.2) that wasn't necessary. > > > > > > I'm working in a cluster where I have no access to these X11-related > > > libraries and that's why I need them not be linked. Is it there some way > > to > > > disable X11 when installing hwloc? maybe enforcing some configuration > > > variables when installing it through petsc or installing it > > independently? > > > > > > thanks a lot! > > > > > > Below the configuration command of the two installations I've tried with > > > the 3.13.0 version. 
> > > > > > =================== WITH PASTIX =================== > > > > > > ./configure --with-make-np=20 > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 --with-debugging=0 > > > --doCleanup=0 \ > > > --with-mpi=1 \ > > > --with-valgrind=1 --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > --download-scalapack \ > > > --download-openblas \ > > > --download-mumps \ > > > --download-superlu_dist \ > > > --download-metis \ > > > --download-parmetis \ > > > --download-ptscotch \ > > > --download-hypre \ > > > > > > *--download-pastix \--download-hwloc \* > > > --with-64-bit-indices=1 \ > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > --with-cxx-dialect=C++11 \ > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > =================== WITHOUT PASTIX =================== > > > > > > the same as above but the options "--download-pastix --download-hwloc" > > > > > > ====================================================== > > > > > > > > From knepley at gmail.com Mon Apr 6 12:58:19 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Apr 2020 13:58:19 -0400 Subject: [petsc-users] DMCreateSectionSF In-Reply-To: <4D249062-8B29-46B4-954B-E407235A5D28@pnnl.gov> References: <4D249062-8B29-46B4-954B-E407235A5D28@pnnl.gov> Message-ID: On Mon, Apr 6, 2020 at 12:13 PM Abhyankar, Shrirang G via petsc-users < petsc-users at mcs.anl.gov> wrote: > I am getting an error for DMCreateSectionSF() with latest petsc-master. I > see a DMCreateDefaultSF in petscdm.h. Has DMCreateSectionSF been renamed to > DMCreateDefaultSF? > No. What is the error? Thanks, Matt > > > Thanks, > > Shri > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexprescott at email.arizona.edu Mon Apr 6 13:04:42 2020 From: alexprescott at email.arizona.edu (Alexander B Prescott) Date: Mon, 6 Apr 2020 11:04:42 -0700 Subject: [petsc-users] Discontinuities in the Jacobian matrix for nonlinear problem Message-ID: Hello, The non-linear boundary-value problem I am applying PETSc to is a relatively simple steady-state flow routing algorithm based on the continuity equation, such that Div(Q) = 0 everywhere (Q=discharge). I use a finite volume approach to calculate flow between nodes, with Q calculated as a piecewise smooth function of the local flow depth and the water-surface slope. In 1D, the residual is calculated as R(x_i)=Q_i-1/2 - Q_i+1/2. For example, Q_i-1/2 at x[i]: Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), if x[i-1]+z[i-1] > x[i]+z[i] Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), if x[i]+z[i] > x[i-1]+z[i-1] Where z[i] is local topography and doesn't change over the iterations, and Q_i+1/2 is computed analogously. So the residual derivatives with respect to x[i-1], x[i] and x[i+1] are not continuous when the water-surface slope = 0. Are there intelligent ways to handle this problem? My 1D trial runs naively fix any zero-valued water-surface slopes to a small non-zero positive value (e.g. 1e-12). Solver convergence has been mixed and highly dependent on the initial guess. So far, FAS with QN coarse solver has been the most robust. 
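To make the setup concrete, here is a minimal serial sketch of that residual written as a SNES callback. It is illustrative only: the proportionality constant is taken as 1, the two end rows are placeholder Dirichlet-type conditions, the zero-slope case uses the naive eps fix mentioned above, and none of the names come from the actual code.

#include <petscsnes.h>

typedef struct {
  PetscReal *z;    /* fixed topography, one entry per node */
  PetscReal  eps;  /* small positive value used where the water surface is flat */
} AppCtx;

/* Piecewise-smooth interface flux: Q ~ +/- sqrt(|water-surface slope|) */
static PetscReal InterfaceFlux(PetscReal hL, PetscReal zL, PetscReal hR, PetscReal zR, PetscReal eps)
{
  PetscReal s = (hL + zL) - (hR + zR);
  if (PetscAbsReal(s) < eps) s = eps;   /* slope fixed to a small positive value */
  return (s > 0.0) ? PetscSqrtReal(s) : -PetscSqrtReal(-s);
}

static PetscErrorCode FormFunction(SNES snes, Vec X, Vec R, void *ptr)
{
  AppCtx            *ctx = (AppCtx*)ptr;
  const PetscScalar *x;
  PetscScalar       *r;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(X, &n); CHKERRQ(ierr);
  ierr = VecGetArrayRead(X, &x); CHKERRQ(ierr);
  ierr = VecGetArray(R, &r); CHKERRQ(ierr);
  for (i = 1; i < n - 1; i++) {
    PetscReal Qm = InterfaceFlux(PetscRealPart(x[i-1]), ctx->z[i-1], PetscRealPart(x[i]), ctx->z[i], ctx->eps);
    PetscReal Qp = InterfaceFlux(PetscRealPart(x[i]), ctx->z[i], PetscRealPart(x[i+1]), ctx->z[i+1], ctx->eps);
    r[i] = Qm - Qp;                     /* steady continuity: net discharge at node i */
  }
  r[0]   = x[0];                        /* placeholder boundary rows */
  r[n-1] = x[n-1];
  ierr = VecRestoreArrayRead(X, &x); CHKERRQ(ierr);
  ierr = VecRestoreArray(R, &r); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Note that dQ/ds grows without bound as the slope s approaches zero, so any Jacobian assembled from this inherits the non-smoothness at s = 0, which is where the convergence trouble shows up.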
Restricting x[i] to be non-negative is a separate issue, to which I have applied the SNES_VI solvers. They perform modestly but have been less robust. Best, Alexander -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 6 13:13:01 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 13:13:01 -0500 (CDT) Subject: [petsc-users] Issue with the installation of Petsc In-Reply-To: References: Message-ID: >>>>>>> Executing: /home/martinelli/repositories/petsc/arch-linux-c-opt/bin/mpif90 -o /tmp/petsc-nazwq29o/config.setCompilers/conftest /tmp/petsc-nazwq29o/config.setCompilers/conftest.o ERROR while running executable: Could not execute "['/tmp/petsc-nazwq29o/config.setCompilers/conftest']": /tmp/petsc-nazwq29o/config.setCompilers/conftest: symbol lookup error: /home/martinelli/repositories/petsc/arch-linux-c-opt//lib/libmpifort.so.12: undefined symbol: MPIR_F_NeedInit <<<<<< Ok - its the same issue that you've got when using --with-batch. > > > 18943:libmpi.so:000000000031ab00 D MPIR_F_NeedInit > > > 27046:libmpich.so:000000000031ab00 D MPIR_F_NeedInit The 'nm' output below does show that the symbol is in 'libmpich.so' and 'libmpi.so' What do you have for: /home/martinelli/repositories/petsc/arch-linux-c-opt/bin/mpif90 -show If libmpi.so is listed in the link like - then the compiler for some reason isn't resolving this symbol. You can try using --download-openmpi - or building mpich manually [without some of the additional optimization flags] - and see if that works. BTW: I tried this exact same build with 'gcc (GCC) 9.2.1 20190827 (Red Hat 9.2.1-1)' on 'Intel(R) Core(TM) i7-8700 CPU @ 3.20GH' and don't see this issue. Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack --with-mpi=1 --download-superlu_dist --download-mumps --download-hypre --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" --download-scalapack Satish On Mon, 6 Apr 2020, Franco Dassi wrote: > Dear all, > > Thank you for your quick answer. Here you are the .log file > > franco > > Il giorno lun 6 apr 2020 alle ore 16:09 Satish Balay ha > scritto: > > > Can you send configure.log from the first build attempt? > > > > Satish > > > > On Mon, 6 Apr 2020, Franco Dassi wrote: > > > > > Good morning, > > > > > > We are using your Petsc library to solve linear systems coming from PDEs. > > > > > > However recently we are trying to install PETSC (maint branch) on a linux > > > system but we have some issues. 
> > > > > > On the one hand, when we use the following configure command > > > > > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran > > > --download-mpich --download-fblaslapack --with-mpi=1 > > > --download-superlu_dist --download-mumps --download-hypre > > > --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' > > > CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 > > > -march=native -mtune=native' --download-scalapack > > > > > > > > > I obtain the following error: > > > > > > > > =============================================================================== > > > Configuring PETSc to compile on your system > > > > > > > > =============================================================================== > > > TESTING: checkFortranCompiler from > > > config.setCompilers(config/BuildSystem/config/setCompilers.py:989) > > > > > > > > ******************************************************************************* > > > OSError while running ./configure > > > > > ------------------------------------------------------------------------------- > > > Cannot run executables created with FC. If this machine uses a batch > > system > > > to submit jobs you will need to configure using ./configure with the > > > additional option --with-batch. > > > Otherwise there is problem with the compilers. Can you compile and run > > > code with your compiler > > > '/home/gurst/repositories/petsc/arch-linux-c-opt/bin/mpif90'? > > > > > ******************************************************************************* > > > > > > > > > On the other hand if we add the --with-batch the library is compiled, but > > > when we try to use it to run the code we obtain the following error (at > > run > > > time): > > > > > > ./testGBDForMixed3d: symbol lookup error: > > > > > /home/martinelli/repositories/petsc/arch-linux-c-opt//lib/libmpifort.so.12: > > > undefined symbol: MPIR_F_NeedInit > > > > > > > > > Looking the symbols table in the petsc/arch-linux-c-opt/lib directory, > > > It seems that something is undefined: > > > > > > ~/repositories/petsc/arch-linux-c-opt/lib $ nm -A *.so | grep -n > > > MPIR_F_NeedInit > > > > > > 12591:libfmpich.so: U MPIR_F_NeedInit > > > 18943:libmpi.so:000000000031ab00 D MPIR_F_NeedInit > > > 27046:libmpich.so:000000000031ab00 D MPIR_F_NeedInit > > > 34565:libmpichf90.so: U MPIR_F_NeedInit > > > 40333:libmpifort.so: U MPIR_F_NeedInit > > > 46685:libmpl.so:000000000031ab00 D MPIR_F_NeedInit > > > 54788:libopa.so:000000000031ab00 D MPIR_F_NeedInit > > > > > > Are we doing something wrong? > > > > > > Thank you in advance for your time > > > best > > > Massimiliano and Franco > > > > > > > > From shrirang.abhyankar at pnnl.gov Mon Apr 6 13:14:39 2020 From: shrirang.abhyankar at pnnl.gov (Abhyankar, Shrirang G) Date: Mon, 6 Apr 2020 18:14:39 +0000 Subject: [petsc-users] DMCreateSectionSF In-Reply-To: References: <4D249062-8B29-46B4-954B-E407235A5D28@pnnl.gov> Message-ID: Sorry, my bad. I had a stale branch. Thanks, Shri From: Matthew Knepley Date: Monday, April 6, 2020 at 12:59 PM To: "Abhyankar, Shrirang G" Cc: PETSc Users Subject: Re: [petsc-users] DMCreateSectionSF On Mon, Apr 6, 2020 at 12:13 PM Abhyankar, Shrirang G via petsc-users > wrote: I am getting an error for DMCreateSectionSF() with latest petsc-master. I see a DMCreateDefaultSF in petscdm.h. Has DMCreateSectionSF been renamed to DMCreateDefaultSF? No. What is the error? 
Thanks, Matt Thanks, Shri -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaramillopalma at gmail.com Mon Apr 6 13:23:43 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 15:23:43 -0300 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Hello Satish, Im sorry but I think I tested this workaround with the wrong installation, my bad. In fact the libXNVC, X11 libraries are still being linked even with the --download-hwloc-configure-arguments=--without-x option Im attaching the config.log file located in $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 you can see that the option --without-x is there (line 7) but by means of $ ldd foo I see that the links are still there On Mon, Apr 6, 2020 at 2:57 PM Satish Balay wrote: > Great! > > Wrt pastix dependency on hwloc - > config/BuildSystem/config/packages/PaStiX.py has the following comment: > > # PaStiX.py does not absolutely require hwloc, but it performs better > with it and can fail (in ways not easily tested) without it > # > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > # https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that does not > need the extra --download-hwloc-configure-arguments=--without-x option]. > Can you give this a try? > > Satish > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > hello Satish, > > adding > > --download-hwloc-configure-arguments=--without-x > > worked perfectly > > > > thank you! > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay wrote: > > > > > you can try: > > > > > > --download-pastix --download-hwloc > > > --download-hwloc-configure-arguments=--without-x > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > Satish > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > hello everyone, > > > > > > > > I have a fresh installation of the 3.13.0 version with pastix. Like > with > > > > previous versions, I'm using the options > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > to disable X11 > > > > > > > > however, when compiling my program foo and doing > > > > > > > > $ ldd foo > > > > > > > > between the linked libraries there appear: > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > the first one related to NVIDIA. I observed that this does not happen > > > when > > > > installing PETSc without hwloc. In this new version, PETSc requires > to > > > > install hwloc when trying to install pastix. In previous versions of > > > PETSc > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > I'm working in a cluster where I have no access to these X11-related > > > > libraries and that's why I need them not be linked. Is it there some > way > > > to > > > > disable X11 when installing hwloc? maybe enforcing some configuration > > > > variables when installing it through petsc or installing it > > > independently? > > > > > > > > thanks a lot! > > > > > > > > Below the configuration command of the two installations I've tried > with > > > > the 3.13.0 version. 
> > > > > > > > =================== WITH PASTIX =================== > > > > > > > > ./configure --with-make-np=20 > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > --with-debugging=0 > > > > --doCleanup=0 \ > > > > --with-mpi=1 \ > > > > --with-valgrind=1 > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > --download-scalapack \ > > > > --download-openblas \ > > > > --download-mumps \ > > > > --download-superlu_dist \ > > > > --download-metis \ > > > > --download-parmetis \ > > > > --download-ptscotch \ > > > > --download-hypre \ > > > > > > > > *--download-pastix \--download-hwloc \* > > > > --with-64-bit-indices=1 \ > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > --with-cxx-dialect=C++11 \ > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > the same as above but the options "--download-pastix > --download-hwloc" > > > > > > > > ====================================================== > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: config.log Type: text/x-log Size: 439191 bytes Desc: not available URL: From knepley at gmail.com Mon Apr 6 13:31:48 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Apr 2020 14:31:48 -0400 Subject: [petsc-users] Discontinuities in the Jacobian matrix for nonlinear problem In-Reply-To: References: Message-ID: On Mon, Apr 6, 2020 at 2:06 PM Alexander B Prescott < alexprescott at email.arizona.edu> wrote: > Hello, > > The non-linear boundary-value problem I am applying PETSc to is a > relatively simple steady-state flow routing algorithm based on the > continuity equation, such that Div(Q) = 0 everywhere (Q=discharge). I use a > finite volume approach to calculate flow between nodes, with Q calculated > as a piecewise smooth function of the local flow depth and the > water-surface slope. In 1D, the residual is calculated as R(x_i)=Q_i-1/2 - > Q_i+1/2. > For example, Q_i-1/2 at x[i]: > > Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), > if x[i-1]+z[i-1] > x[i]+z[i] > Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), > if x[i]+z[i] > x[i-1]+z[i-1] > > > Where z[i] is local topography and doesn't change over the iterations, and > Q_i+1/2 is computed analogously. So the residual derivatives with respect > to x[i-1], x[i] and x[i+1] are not continuous when the water-surface slope > = 0. > > Are there intelligent ways to handle this problem? My 1D trial runs > naively fix any zero-valued water-surface slopes to a small non-zero > positive value (e.g. 1e-12). Solver convergence has been mixed and highly > dependent on the initial guess. So far, FAS with QN coarse solver has been > the most robust. > > Restricting x[i] to be non-negative is a separate issue, to which I have > applied the SNES_VI solvers. They perform modestly but have been less > robust. > My understanding is that this is a shortcoming of the model, not the solver. However, I am Cc'ing Nathan since he knows about these models. Thanks, Matt > Best, > Alexander > > > > -- > Alexander Prescott > alexprescott at email.arizona.edu > PhD Candidate, The University of Arizona > Department of Geosciences > 1040 E. 
4th Street > Tucson, AZ, 85721 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 6 13:53:32 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 13:53:32 -0500 (CDT) Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Looks like the option is --with-x=no Can you give this a try? Branch balay/fix-hwloc-x-dependency/maint has this update now Satish On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > Hello Satish, Im sorry but I think I tested this workaround with the wrong > installation, my bad. > In fact the libXNVC, X11 libraries are still being linked even with the > --download-hwloc-configure-arguments=--without-x option > > Im attaching the config.log file located in > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > you can see that the option --without-x is there (line 7) but by means of > $ ldd foo > I see that the links are still there > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay wrote: > > > Great! > > > > Wrt pastix dependency on hwloc - > > config/BuildSystem/config/packages/PaStiX.py has the following comment: > > > > # PaStiX.py does not absolutely require hwloc, but it performs better > > with it and can fail (in ways not easily tested) without it > > # > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > # https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that does not > > need the extra --download-hwloc-configure-arguments=--without-x option]. > > Can you give this a try? > > > > Satish > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > hello Satish, > > > adding > > > --download-hwloc-configure-arguments=--without-x > > > worked perfectly > > > > > > thank you! > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay wrote: > > > > > > > you can try: > > > > > > > > --download-pastix --download-hwloc > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > Satish > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > hello everyone, > > > > > > > > > > I have a fresh installation of the 3.13.0 version with pastix. Like > > with > > > > > previous versions, I'm using the options > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > to disable X11 > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > $ ldd foo > > > > > > > > > > between the linked libraries there appear: > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > the first one related to NVIDIA. I observed that this does not happen > > > > when > > > > > installing PETSc without hwloc. In this new version, PETSc requires > > to > > > > > install hwloc when trying to install pastix. In previous versions of > > > > PETSc > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > I'm working in a cluster where I have no access to these X11-related > > > > > libraries and that's why I need them not be linked. Is it there some > > way > > > > to > > > > > disable X11 when installing hwloc? 
maybe enforcing some configuration > > > > > variables when installing it through petsc or installing it > > > > independently? > > > > > > > > > > thanks a lot! > > > > > > > > > > Below the configuration command of the two installations I've tried > > with > > > > > the 3.13.0 version. > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > ./configure --with-make-np=20 > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > --with-debugging=0 > > > > > --doCleanup=0 \ > > > > > --with-mpi=1 \ > > > > > --with-valgrind=1 > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > --download-scalapack \ > > > > > --download-openblas \ > > > > > --download-mumps \ > > > > > --download-superlu_dist \ > > > > > --download-metis \ > > > > > --download-parmetis \ > > > > > --download-ptscotch \ > > > > > --download-hypre \ > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > --with-64-bit-indices=1 \ > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > --with-cxx-dialect=C++11 \ > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > > > the same as above but the options "--download-pastix > > --download-hwloc" > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > From lzou at anl.gov Mon Apr 6 13:55:57 2020 From: lzou at anl.gov (Zou, Ling) Date: Mon, 6 Apr 2020 18:55:57 +0000 Subject: [petsc-users] Discontinuities in the Jacobian matrix for nonlinear problem In-Reply-To: References: Message-ID: What about ?bending? the physics a bit? Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), if x[i-1]+z[i-1] > x[i]+z[i] + eps Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), if x[i]+z[i] > x[i-1]+z[i-1] + eps Q_i-1/2 proportional to x[i-1] + z[i-1] - (x[i] + z[i]) in between in which, eps is a very small positive number. -Ling From: petsc-users on behalf of Alexander B Prescott Date: Monday, April 6, 2020 at 1:06 PM To: PETSc Subject: [petsc-users] Discontinuities in the Jacobian matrix for nonlinear problem Hello, The non-linear boundary-value problem I am applying PETSc to is a relatively simple steady-state flow routing algorithm based on the continuity equation, such that Div(Q) = 0 everywhere (Q=discharge). I use a finite volume approach to calculate flow between nodes, with Q calculated as a piecewise smooth function of the local flow depth and the water-surface slope. In 1D, the residual is calculated as R(x_i)=Q_i-1/2 - Q_i+1/2. For example, Q_i-1/2 at x[i]: Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), if x[i-1]+z[i-1] > x[i]+z[i] Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), if x[i]+z[i] > x[i-1]+z[i-1] Where z[i] is local topography and doesn't change over the iterations, and Q_i+1/2 is computed analogously. So the residual derivatives with respect to x[i-1], x[i] and x[i+1] are not continuous when the water-surface slope = 0. Are there intelligent ways to handle this problem? My 1D trial runs naively fix any zero-valued water-surface slopes to a small non-zero positive value (e.g. 1e-12). Solver convergence has been mixed and highly dependent on the initial guess. 
So far, FAS with QN coarse solver has been the most robust.

Restricting x[i] to be non-negative is a separate issue, to which I have applied the SNES_VI solvers. They perform modestly but have been less robust.

Best,
Alexander

--
Alexander Prescott
alexprescott at email.arizona.edu
PhD Candidate, The University of Arizona
Department of Geosciences
1040 E. 4th Street
Tucson, AZ, 85721
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mfadams at lbl.gov Mon Apr 6 14:00:45 2020
From: mfadams at lbl.gov (Mark Adams)
Date: Mon, 6 Apr 2020 15:00:45 -0400
Subject: [petsc-users] strange TS adaptivity behavior
Message-ID:

I have a problem that is fairly smooth and when it decreases the time step it just keeps decreasing almost forever. I wanted to see if anyone has any clue what is going on here.

This test reaches a quasi equilibrium and the adaptivity uses fairly large time steps with dU/dt - F(u,t) = 0. I set a max time step of 10. A source starts at one point (dU/dt - F(u,t) = S(t)). It cools pretty fast.

The truncation error increases a lot when the source starts but it does not decrease the time step immediately because I had a max time step set:
....
Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 4
[0] TSAdaptChoose_Basic(): Estimated scaled local *truncation error 0.000214757*, accepting step of size 10.
    TSAdapt basic arkimex 0:1bee step *386* accepted t=2292.68 + 1.000e+01 dt=1.000e+01 wlte=0.000215 wltea= -1 wlter= -1
...
[0] TSAdaptChoose_Basic(): Estimated scaled local *truncation error 0.500399,* accepting step of size 10.
    TSAdapt basic arkimex 0:1bee step *387* accepted t=2302.68 + 1.000e+01 dt=1.000e+01 wlte= 0.5 wltea= -1 wlter= -1
388) species-0: charge density= -1.6422881272166e+01 z-momentum= 7.2180929658011e-03 energy= 9.7881753794625e+05

It goes for another few time steps before it rejects the time step for the first time, and then it just keeps reducing and reducing:

[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 0.939966, accepting step of size 10.
    TSAdapt basic arkimex 0:1bee step *395* accepted t=2382.68 + 1.000e+01 dt=9.283e+00 wlte= 0.94 wltea= -1 wlter= -1
....
[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.33533, *rejecting* step of size 9.28296
    TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + 9.283e+00 dt=7.230e+00 wlte= 1.34 wltea= -1 wlter= -1

[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.44054, *rejecting* step of size 7.22994
    TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + 7.230e+00 dt=2.711e+00 wlte= 1.44 wltea= -1 wlter= -1

[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.63529, *rejecting* step of size 2.71072
    TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + 2.711e+00 dt=9.539e-01 wlte= 1.64 wltea= -1 wlter= -1

[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.60415, *rejecting* step of size 0.953894
    TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + 9.539e-01 dt=5.000e-01 wlte= 1.6 wltea= -1 wlter= -1

[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.49714, *accepting because step size 0.5 is at minimum*
    TSAdapt basic arkimex 0:1bee step *396* accepted t=2392.68 + 5.000e-01 dt=5.000e-01 wlte= 1.5 wltea= -1 wlter= -1

Anyone have any idea what is going on here?

Thanks,
Mark
-------------- next part --------------
An HTML attachment was scrubbed...
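The accept/reject decisions above come from the basic TSAdapt controller, and the limits it works within can be adjusted directly rather than left at the defaults. A minimal sketch, assuming a TS named ts has already been set up as in the run above and ierr is the usual PETSc error code; the numerical limits below are placeholders, not recommendations:

    TSAdapt adapt;
    ierr = TSGetAdapt(ts, &adapt); CHKERRQ(ierr);
    /* widen the admissible step range so the controller, not the clamp, decides */
    ierr = TSAdaptSetStepLimits(adapt, 1.e-6, 10.0); CHKERRQ(ierr);
    /* bound how fast dt may shrink or grow from one step to the next */
    ierr = TSAdaptSetClip(adapt, 0.1, 10.0); CHKERRQ(ierr);

The same limits are reachable from the command line with -ts_adapt_dt_min, -ts_adapt_dt_max and -ts_adapt_clip, and -ts_adapt_monitor prints each accept/reject decision without turning on all of -info.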
URL: From ajaramillopalma at gmail.com Mon Apr 6 14:27:52 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 16:27:52 -0300 Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: No, it doesn't work... I also tried --with-x=disabled. I gave a look to ./configure --help in the hwloc directory and to the configure file itself, but Im not finding the right option. On Mon, Apr 6, 2020 at 3:53 PM Satish Balay wrote: > Looks like the option is --with-x=no > > Can you give this a try? Branch balay/fix-hwloc-x-dependency/maint has > this update now > > Satish > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > Hello Satish, Im sorry but I think I tested this workaround with the > wrong > > installation, my bad. > > In fact the libXNVC, X11 libraries are still being linked even with the > > --download-hwloc-configure-arguments=--without-x option > > > > Im attaching the config.log file located in > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > you can see that the option --without-x is there (line 7) but by means of > > $ ldd foo > > I see that the links are still there > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay wrote: > > > > > Great! > > > > > > Wrt pastix dependency on hwloc - > > > config/BuildSystem/config/packages/PaStiX.py has the following comment: > > > > > > # PaStiX.py does not absolutely require hwloc, but it performs > better > > > with it and can fail (in ways not easily tested) without it > > > # > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > # https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that does > not > > > need the extra --download-hwloc-configure-arguments=--without-x > option]. > > > Can you give this a try? > > > > > > Satish > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > hello Satish, > > > > adding > > > > --download-hwloc-configure-arguments=--without-x > > > > worked perfectly > > > > > > > > thank you! > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay > wrote: > > > > > > > > > you can try: > > > > > > > > > > --download-pastix --download-hwloc > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > Satish > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > hello everyone, > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version with pastix. > Like > > > with > > > > > > previous versions, I'm using the options > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > between the linked libraries there appear: > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > the first one related to NVIDIA. I observed that this does not > happen > > > > > when > > > > > > installing PETSc without hwloc. In this new version, PETSc > requires > > > to > > > > > > install hwloc when trying to install pastix. In previous > versions of > > > > > PETSc > > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > > > I'm working in a cluster where I have no access to these > X11-related > > > > > > libraries and that's why I need them not be linked. 
Is it there > some > > > way > > > > > to > > > > > > disable X11 when installing hwloc? maybe enforcing some > configuration > > > > > > variables when installing it through petsc or installing it > > > > > independently? > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > Below the configuration command of the two installations I've > tried > > > with > > > > > > the 3.13.0 version. > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > --with-debugging=0 > > > > > > --doCleanup=0 \ > > > > > > --with-mpi=1 \ > > > > > > --with-valgrind=1 > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > --download-scalapack \ > > > > > > --download-openblas \ > > > > > > --download-mumps \ > > > > > > --download-superlu_dist \ > > > > > > --download-metis \ > > > > > > --download-parmetis \ > > > > > > --download-ptscotch \ > > > > > > --download-hypre \ > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > --with-64-bit-indices=1 \ > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > --with-cxx-dialect=C++11 \ > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > > > > > the same as above but the options "--download-pastix > > > --download-hwloc" > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 6 14:41:45 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 14:41:45 -0500 (CDT) Subject: [petsc-users] deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Hm - its working for me on CentOS7 [I see you are using RHEL7/CentOS7] Can you try the branch balay/fix-hwloc-x-dependency/maint - and see what you get? [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 --download-hwloc --with-x=0 [balay at petsc-c7 petsc:d409bfc]$ ldd arch-linux2-c-debug/lib/libhwloc.so linux-vdso.so.1 => (0x00007ffe88bae000) libm.so.6 => /lib64/libm.so.6 (0x00007ff75c7b9000) libc.so.6 => /lib64/libc.so.6 (0x00007ff75c3eb000) /lib64/ld-linux-x86-64.so.2 (0x00007ff75cd1a000) [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 --download-hwloc --with-x=1 [balay at petsc-c7 petsc:d409bfc]$ ldd arch-linux2-c-debug/lib/libhwloc.so linux-vdso.so.1 => (0x00007ffe129cc000) libm.so.6 => /lib64/libm.so.6 (0x00007f89920fa000) libX11.so.6 => /lib64/libX11.so.6 (0x00007f8991dbc000) libc.so.6 => /lib64/libc.so.6 (0x00007f89919ee000) /lib64/ld-linux-x86-64.so.2 (0x00007f899265b000) libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f89917c6000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f89915c2000) libXau.so.6 => /lib64/libXau.so.6 (0x00007f89913be000) [balay at petsc-c7 petsc:d409bfc]$ Satish On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > No, it doesn't work... I also tried --with-x=disabled. I gave a look to > ./configure --help in the hwloc directory and to the configure file itself, > but Im not finding the right option. 
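A quick way to see which build product is dragging the X libraries in is to run the same ldd check on each installed library and on the application itself and filter for them; a sketch, with the paths and the binary name only as examples:

    ldd $PETSC_DIR/$PETSC_ARCH/lib/libhwloc.so | grep -Ei 'libX11|libXext|libXNVCtrl'
    ldd ./foo | grep -Ei 'libX11|libXext|libXNVCtrl'

No output from grep means the X11/NVCtrl libraries are no longer pulled in. The libXNVCtrl dependency in particular comes from hwloc's GL/NVIDIA display backend rather than from plain X support, so if it persists it may need hwloc's own --disable-gl configure switch (passed via --download-hwloc-configure-arguments) in addition to --without-x; treat that as a guess to be checked against hwloc's ./configure --help.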
> > On Mon, Apr 6, 2020 at 3:53 PM Satish Balay wrote: > > > Looks like the option is --with-x=no > > > > Can you give this a try? Branch balay/fix-hwloc-x-dependency/maint has > > this update now > > > > Satish > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > Hello Satish, Im sorry but I think I tested this workaround with the > > wrong > > > installation, my bad. > > > In fact the libXNVC, X11 libraries are still being linked even with the > > > --download-hwloc-configure-arguments=--without-x option > > > > > > Im attaching the config.log file located in > > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > > you can see that the option --without-x is there (line 7) but by means of > > > $ ldd foo > > > I see that the links are still there > > > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay wrote: > > > > > > > Great! > > > > > > > > Wrt pastix dependency on hwloc - > > > > config/BuildSystem/config/packages/PaStiX.py has the following comment: > > > > > > > > # PaStiX.py does not absolutely require hwloc, but it performs > > better > > > > with it and can fail (in ways not easily tested) without it > > > > # > > > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > > # https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that does > > not > > > > need the extra --download-hwloc-configure-arguments=--without-x > > option]. > > > > Can you give this a try? > > > > > > > > Satish > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > hello Satish, > > > > > adding > > > > > --download-hwloc-configure-arguments=--without-x > > > > > worked perfectly > > > > > > > > > > thank you! > > > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay > > wrote: > > > > > > > > > > > you can try: > > > > > > > > > > > > --download-pastix --download-hwloc > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > > > Satish > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > hello everyone, > > > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version with pastix. > > Like > > > > with > > > > > > > previous versions, I'm using the options > > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > > > between the linked libraries there appear: > > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > > > the first one related to NVIDIA. I observed that this does not > > happen > > > > > > when > > > > > > > installing PETSc without hwloc. In this new version, PETSc > > requires > > > > to > > > > > > > install hwloc when trying to install pastix. In previous > > versions of > > > > > > PETSc > > > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > > > > > I'm working in a cluster where I have no access to these > > X11-related > > > > > > > libraries and that's why I need them not be linked. Is it there > > some > > > > way > > > > > > to > > > > > > > disable X11 when installing hwloc? maybe enforcing some > > configuration > > > > > > > variables when installing it through petsc or installing it > > > > > > independently? 
> > > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > > > Below the configuration command of the two installations I've > > tried > > > > with > > > > > > > the 3.13.0 version. > > > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > > --with-debugging=0 > > > > > > > --doCleanup=0 \ > > > > > > > --with-mpi=1 \ > > > > > > > --with-valgrind=1 > > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > > --download-scalapack \ > > > > > > > --download-openblas \ > > > > > > > --download-mumps \ > > > > > > > --download-superlu_dist \ > > > > > > > --download-metis \ > > > > > > > --download-parmetis \ > > > > > > > --download-ptscotch \ > > > > > > > --download-hypre \ > > > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > > --with-64-bit-indices=1 \ > > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > > --with-cxx-dialect=C++11 \ > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > > > > > > > the same as above but the options "--download-pastix > > > > --download-hwloc" > > > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From alexprescott at email.arizona.edu Mon Apr 6 14:42:17 2020 From: alexprescott at email.arizona.edu (Alexander B Prescott) Date: Mon, 6 Apr 2020 12:42:17 -0700 Subject: [petsc-users] [EXT]Re: Discontinuities in the Jacobian matrix for nonlinear problem In-Reply-To: References: Message-ID: Thank you Ling, that's an interesting idea. Bending the physics like this would be okay for my purposes, I'll give it a look. Best, Alexander On Mon, Apr 6, 2020 at 11:56 AM Zou, Ling wrote: > *External Email* > > What about ?bending? the physics a bit? > > > > Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), > if x[i-1]+z[i-1] > x[i]+z[i] + eps > > Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), > if x[i]+z[i] > x[i-1]+z[i-1] + eps > > Q_i-1/2 proportional to x[i-1] + z[i-1] - (x[i] + > z[i]) in between > > > > in which, eps is a very small positive number. > > > > -Ling > > > > > > *From: *petsc-users on behalf of > Alexander B Prescott > *Date: *Monday, April 6, 2020 at 1:06 PM > *To: *PETSc > *Subject: *[petsc-users] Discontinuities in the Jacobian matrix for > nonlinear problem > > > > Hello, > > > > The non-linear boundary-value problem I am applying PETSc to is a > relatively simple steady-state flow routing algorithm based on the > continuity equation, such that Div(Q) = 0 everywhere (Q=discharge). I use a > finite volume approach to calculate flow between nodes, with Q calculated > as a piecewise smooth function of the local flow depth and the > water-surface slope. In 1D, the residual is calculated as R(x_i)=Q_i-1/2 - > Q_i+1/2. 
> > For example, Q_i-1/2 at x[i]: > > Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), > if x[i-1]+z[i-1] > x[i]+z[i] > > Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), > if x[i]+z[i] > x[i-1]+z[i-1] > > > > Where z[i] is local topography and doesn't change over the iterations, and > Q_i+1/2 is computed analogously. So the residual derivatives with respect > to x[i-1], x[i] and x[i+1] are not continuous when the water-surface slope > = 0. > > > > Are there intelligent ways to handle this problem? My 1D trial runs > naively fix any zero-valued water-surface slopes to a small non-zero > positive value (e.g. 1e-12). Solver convergence has been mixed and highly > dependent on the initial guess. So far, FAS with QN coarse solver has been > the most robust. > > > > Restricting x[i] to be non-negative is a separate issue, to which I have > applied the SNES_VI solvers. They perform modestly but have been less > robust. > > > > Best, > > Alexander > > > > > > > -- > > Alexander Prescott > > alexprescott at email.arizona.edu > > PhD Candidate, The University of Arizona > > Department of Geosciences > > 1040 E. 4th Street > > Tucson, AZ, 85721 > -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nathaniel.collier at gmail.com Mon Apr 6 14:44:39 2020 From: nathaniel.collier at gmail.com (Nathan Collier) Date: Mon, 6 Apr 2020 15:44:39 -0400 Subject: [petsc-users] Discontinuities in the Jacobian matrix for nonlinear problem In-Reply-To: References: Message-ID: Alexander, I am not familiar with your specific model, but I do have experience working with the diffusive / kinematic wave approximation to shallow water equations. I always had a lot of trouble near ponding conditions (steady state), which is odd because when nature says do nothing, you don't expect the equations to be hard to solve. Some kind soul pointed out to me that this was because the equations are derived using a power law model relating flow to slope/depth which breaks down when you should have no flow. So as long as you have flow, the solver behaves well and things are fine (i.e. smooth terrain, academic test problems), but when you have local ponding due to real terrain effects, you are hosed. A better solution approach doesn't help if your equations aren't meant for the problem you are trying to solve. I think 'bending' the physics is the right idea, but I never had much luck myself with this. Nate On Mon, Apr 6, 2020 at 2:56 PM Zou, Ling via petsc-users < petsc-users at mcs.anl.gov> wrote: > What about ?bending? the physics a bit? > > > > Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), > if x[i-1]+z[i-1] > x[i]+z[i] + eps > > Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), > if x[i]+z[i] > x[i-1]+z[i-1] + eps > > Q_i-1/2 proportional to x[i-1] + z[i-1] - (x[i] + > z[i]) in between > > > > in which, eps is a very small positive number. 
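The reason the unregularized form is hard on Newton is that dQ/ds ~ 1/(2*sqrt(s)) as the slope s = x[i-1]+z[i-1] - (x[i]+z[i]) goes to zero, so the Jacobian entries blow up exactly at the ponded state; the eps cutoff above swaps in a linear segment there so the derivative stays bounded. A minimal sketch of such a regularized flux, with the proportionality constant K, the cutoff eps, and the function name chosen only for illustration:

    /* s = (x[i-1]+z[i-1]) - (x[i]+z[i]); K and eps are illustrative placeholders */
    static PetscReal RegularizedFlux(PetscReal s, PetscReal K, PetscReal eps)
    {
      if (s >  eps) return  K * PetscSqrtReal(s);
      if (s < -eps) return -K * PetscSqrtReal(-s);
      return K * s / PetscSqrtReal(eps);   /* linear through the origin, matches sqrt at +/- eps */
    }

This choice is continuous at +/- eps but its slope still jumps by a factor of two there; a cubic (Hermite) blend over [-eps, eps] would make it C1 if that turns out to matter for the line search.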
> > > > -Ling > > > > > > *From: *petsc-users on behalf of > Alexander B Prescott > *Date: *Monday, April 6, 2020 at 1:06 PM > *To: *PETSc > *Subject: *[petsc-users] Discontinuities in the Jacobian matrix for > nonlinear problem > > > > Hello, > > > > The non-linear boundary-value problem I am applying PETSc to is a > relatively simple steady-state flow routing algorithm based on the > continuity equation, such that Div(Q) = 0 everywhere (Q=discharge). I use a > finite volume approach to calculate flow between nodes, with Q calculated > as a piecewise smooth function of the local flow depth and the > water-surface slope. In 1D, the residual is calculated as R(x_i)=Q_i-1/2 - > Q_i+1/2. > > For example, Q_i-1/2 at x[i]: > > Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), > if x[i-1]+z[i-1] > x[i]+z[i] > > Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), > if x[i]+z[i] > x[i-1]+z[i-1] > > > > Where z[i] is local topography and doesn't change over the iterations, and > Q_i+1/2 is computed analogously. So the residual derivatives with respect > to x[i-1], x[i] and x[i+1] are not continuous when the water-surface slope > = 0. > > > > Are there intelligent ways to handle this problem? My 1D trial runs > naively fix any zero-valued water-surface slopes to a small non-zero > positive value (e.g. 1e-12). Solver convergence has been mixed and highly > dependent on the initial guess. So far, FAS with QN coarse solver has been > the most robust. > > > > Restricting x[i] to be non-negative is a separate issue, to which I have > applied the SNES_VI solvers. They perform modestly but have been less > robust. > > > > Best, > > Alexander > > > > > > > -- > > Alexander Prescott > > alexprescott at email.arizona.edu > > PhD Candidate, The University of Arizona > > Department of Geosciences > > 1040 E. 4th Street > > Tucson, AZ, 85721 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Apr 6 14:48:01 2020 From: jed at jedbrown.org (Jed Brown) Date: Mon, 06 Apr 2020 13:48:01 -0600 Subject: [petsc-users] strange TS adaptivity behavior In-Reply-To: References: Message-ID: <87a73o5rbi.fsf@jedbrown.org> This typically happens when your model is discontinuous or you activate some fast transitien. You have a really aggressive lower bound on your time step so its hard to tell here. Mark Adams writes: > I have a problem that is fairly smooth and when it decreases the time step > it just keeps decreasing almost forever. I wanted to see if anyone has any > clue what is going on here. > > This test reaches a quasi equilibrium and the adaptivity uses fairly large > time steps with dU/dt - F(u,t) = 0. I set a max time step of 10. A source > starts at one point (dU/dt - F(u,t) = S(t)). It cools pretty fast. > > THe truncation error increase a lot when the source starts but it does not > decrease the time step immediately because I had a mat time step set: > .... > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 4 > [0] TSAdaptChoose_Basic(): Estimated scaled local *truncation error > 0.000214757*, accepting step of size 10. > TSAdapt basic arkimex 0:1bee step *386* accepted t=2292.68 + > 1.000e+01 dt=1.000e+01 wlte=0.000215 wltea= -1 wlter= -1 > ... > [0] TSAdaptChoose_Basic(): Estimated scaled local *truncation error > 0.500399,* accepting step of size 10. 
> TSAdapt basic arkimex 0:1bee step *387* accepted t=2302.68 + > 1.000e+01 dt=1.000e+01 wlte= 0.5 wltea= -1 wlter= -1 > 388) species-0: charge density= -1.6422881272166e+01 z-momentum= > 7.2180929658011e-03 energy= 9.7881753794625e+05 > > It goes for another few time steps before it rejects the time step for the > first time, and then it just keeps reducing and reducing: > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > 0.939966, accepting step of size 10. > TSAdapt basic arkimex 0:1bee step *395* accepted t=2382.68 + > 1.000e+01 dt=9.283e+00 wlte= 0.94 wltea= -1 wlter= -1 > .... > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.33533, > *rejecting* step of size 9.28296 > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > 9.283e+00 dt=7.230e+00 wlte= 1.34 wltea= -1 wlter= -1 > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.44054, > *rejecting* step of size 7.22994 > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > 7.230e+00 dt=2.711e+00 wlte= 1.44 wltea= -1 wlter= -1 > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.63529, > *rejecting* step of size 2.71072 > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > 2.711e+00 dt=9.539e-01 wlte= 1.64 wltea= -1 wlter= -1 > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 1.60415, > *rejecting* step of size 0.953894 > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > 9.539e-01 dt=5.000e-01 wlte= 1.6 wltea= -1 wlter= -1 > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > 1.49714, *accepting > because step size 0.5 is at minimum* > TSAdapt basic arkimex 0:1bee step *396* accepted t=2392.68 + > 5.000e-01 dt=5.000e-01 wlte= 1.5 wltea= -1 wlter= -1 > > Anyone have any idea what is going on here? > > Thanks, > Mark From ajaramillopalma at gmail.com Mon Apr 6 15:55:41 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 17:55:41 -0300 Subject: [petsc-users] [WARNING: UNSCANNABLE EXTRACTION FAILED]Re: deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: I've tried two installations in that branch, a "full installation" with every package I need, and a minimal one. I attach the results here, these libraries are still being linked On Mon, Apr 6, 2020 at 4:41 PM Satish Balay wrote: > Hm - its working for me on CentOS7 [I see you are using RHEL7/CentOS7] > > > Can you try the branch balay/fix-hwloc-x-dependency/maint - and see what > you get? 
> > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 --download-hwloc > --with-x=0 > > [balay at petsc-c7 petsc:d409bfc]$ ldd arch-linux2-c-debug/lib/libhwloc.so > > > linux-vdso.so.1 => (0x00007ffe88bae000) > libm.so.6 => /lib64/libm.so.6 (0x00007ff75c7b9000) > libc.so.6 => /lib64/libc.so.6 (0x00007ff75c3eb000) > /lib64/ld-linux-x86-64.so.2 (0x00007ff75cd1a000) > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 --download-hwloc > --with-x=1 > > [balay at petsc-c7 petsc:d409bfc]$ ldd arch-linux2-c-debug/lib/libhwloc.so > > > linux-vdso.so.1 => (0x00007ffe129cc000) > libm.so.6 => /lib64/libm.so.6 (0x00007f89920fa000) > libX11.so.6 => /lib64/libX11.so.6 (0x00007f8991dbc000) > libc.so.6 => /lib64/libc.so.6 (0x00007f89919ee000) > /lib64/ld-linux-x86-64.so.2 (0x00007f899265b000) > libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f89917c6000) > libdl.so.2 => /lib64/libdl.so.2 (0x00007f89915c2000) > libXau.so.6 => /lib64/libXau.so.6 (0x00007f89913be000) > [balay at petsc-c7 petsc:d409bfc]$ > > Satish > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > No, it doesn't work... I also tried --with-x=disabled. I gave a look to > > ./configure --help in the hwloc directory and to the configure file > itself, > > but Im not finding the right option. > > > > On Mon, Apr 6, 2020 at 3:53 PM Satish Balay wrote: > > > > > Looks like the option is --with-x=no > > > > > > Can you give this a try? Branch balay/fix-hwloc-x-dependency/maint has > > > this update now > > > > > > Satish > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > Hello Satish, Im sorry but I think I tested this workaround with the > > > wrong > > > > installation, my bad. > > > > In fact the libXNVC, X11 libraries are still being linked even with > the > > > > --download-hwloc-configure-arguments=--without-x option > > > > > > > > Im attaching the config.log file located in > > > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > > > you can see that the option --without-x is there (line 7) but by > means of > > > > $ ldd foo > > > > I see that the links are still there > > > > > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay > wrote: > > > > > > > > > Great! > > > > > > > > > > Wrt pastix dependency on hwloc - > > > > > config/BuildSystem/config/packages/PaStiX.py has the following > comment: > > > > > > > > > > # PaStiX.py does not absolutely require hwloc, but it performs > > > better > > > > > with it and can fail (in ways not easily tested) without it > > > > > # > > > > > > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > > > # > https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that > does > > > not > > > > > need the extra --download-hwloc-configure-arguments=--without-x > > > option]. > > > > > Can you give this a try? > > > > > > > > > > Satish > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > hello Satish, > > > > > > adding > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > worked perfectly > > > > > > > > > > > > thank you! 
> > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay > > > wrote: > > > > > > > > > > > > > you can try: > > > > > > > > > > > > > > --download-pastix --download-hwloc > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > hello everyone, > > > > > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version with > pastix. > > > Like > > > > > with > > > > > > > > previous versions, I'm using the options > > > > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > > > > > between the linked libraries there appear: > > > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > > > > > the first one related to NVIDIA. I observed that this does > not > > > happen > > > > > > > when > > > > > > > > installing PETSc without hwloc. In this new version, PETSc > > > requires > > > > > to > > > > > > > > install hwloc when trying to install pastix. In previous > > > versions of > > > > > > > PETSc > > > > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > > > > > > > I'm working in a cluster where I have no access to these > > > X11-related > > > > > > > > libraries and that's why I need them not be linked. Is it > there > > > some > > > > > way > > > > > > > to > > > > > > > > disable X11 when installing hwloc? maybe enforcing some > > > configuration > > > > > > > > variables when installing it through petsc or installing it > > > > > > > independently? > > > > > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > > > > > Below the configuration command of the two installations I've > > > tried > > > > > with > > > > > > > > the 3.13.0 version. 
> > > > > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > > > --with-debugging=0 > > > > > > > > --doCleanup=0 \ > > > > > > > > --with-mpi=1 \ > > > > > > > > --with-valgrind=1 > > > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > > > --download-scalapack \ > > > > > > > > --download-openblas \ > > > > > > > > --download-mumps \ > > > > > > > > --download-superlu_dist \ > > > > > > > > --download-metis \ > > > > > > > > --download-parmetis \ > > > > > > > > --download-ptscotch \ > > > > > > > > --download-hypre \ > > > > > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > > > --with-64-bit-indices=1 \ > > > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > > > --with-cxx-dialect=C++11 \ > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > > > > > > > > > the same as above but the options "--download-pastix > > > > > --download-hwloc" > > > > > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: hwloc-x11=no.tar.gz Type: application/gzip Size: 888946 bytes Desc: not available URL: From mfadams at lbl.gov Mon Apr 6 16:00:14 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 6 Apr 2020 17:00:14 -0400 Subject: [petsc-users] strange TS adaptivity behavior In-Reply-To: <87a73o5rbi.fsf@jedbrown.org> References: <87a73o5rbi.fsf@jedbrown.org> Message-ID: On Mon, Apr 6, 2020 at 3:47 PM Jed Brown wrote: > This typically happens when your model is discontinuous or you activate > some fast transitien. You have a really aggressive lower bound on your > time step so its hard to tell here. > The time step goes down to like 1e-3. I am suspecting that my source term needs to be smoother and am working on that now. It is disconcerting that the truncation error goes up as the time stp is reduced, but in this example it does start to go down again. Thanks, Mark > > Mark Adams writes: > > > I have a problem that is fairly smooth and when it decreases the time > step > > it just keeps decreasing almost forever. I wanted to see if anyone has > any > > clue what is going on here. > > > > This test reaches a quasi equilibrium and the adaptivity uses fairly > large > > time steps with dU/dt - F(u,t) = 0. I set a max time step of 10. A source > > starts at one point (dU/dt - F(u,t) = S(t)). It cools pretty fast. > > > > THe truncation error increase a lot when the source starts but it does > not > > decrease the time step immediately because I had a mat time step set: > > .... > > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 4 > > [0] TSAdaptChoose_Basic(): Estimated scaled local *truncation error > > 0.000214757*, accepting step of size 10. > > TSAdapt basic arkimex 0:1bee step *386* accepted t=2292.68 + > > 1.000e+01 dt=1.000e+01 wlte=0.000215 wltea= -1 wlter= -1 > > ... 
> > [0] TSAdaptChoose_Basic(): Estimated scaled local *truncation error > > 0.500399,* accepting step of size 10. > > TSAdapt basic arkimex 0:1bee step *387* accepted t=2302.68 + > > 1.000e+01 dt=1.000e+01 wlte= 0.5 wltea= -1 wlter= -1 > > 388) species-0: charge density= -1.6422881272166e+01 z-momentum= > > 7.2180929658011e-03 energy= 9.7881753794625e+05 > > > > It goes for another few time steps before it rejects the time step for > the > > first time, and then it just keeps reducing and reducing: > > > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > > 0.939966, accepting step of size 10. > > TSAdapt basic arkimex 0:1bee step *395* accepted t=2382.68 + > > 1.000e+01 dt=9.283e+00 wlte= 0.94 wltea= -1 wlter= -1 > > .... > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > 1.33533, > > *rejecting* step of size 9.28296 > > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > > 9.283e+00 dt=7.230e+00 wlte= 1.34 wltea= -1 wlter= -1 > > > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > 1.44054, > > *rejecting* step of size 7.22994 > > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > > 7.230e+00 dt=2.711e+00 wlte= 1.44 wltea= -1 wlter= -1 > > > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > 1.63529, > > *rejecting* step of size 2.71072 > > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > > 2.711e+00 dt=9.539e-01 wlte= 1.64 wltea= -1 wlter= -1 > > > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > 1.60415, > > *rejecting* step of size 0.953894 > > TSAdapt basic arkimex 0:1bee step *396* rejected t=2392.68 + > > 9.539e-01 dt=5.000e-01 wlte= 1.6 wltea= -1 wlter= -1 > > > > [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error > > 1.49714, *accepting > > because step size 0.5 is at minimum* > > TSAdapt basic arkimex 0:1bee step *396* accepted t=2392.68 + > > 5.000e-01 dt=5.000e-01 wlte= 1.5 wltea= -1 wlter= -1 > > > > Anyone have any idea what is going on here? > > > > Thanks, > > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 6 16:02:43 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 16:02:43 -0500 (CDT) Subject: [petsc-users] [WARNING: UNSCANNABLE EXTRACTION FAILED]Re: deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: The attachments didn't come through.. >>>> DENIAL OF SERVICE ALERT A denial of service protection limit was exceeded. The file has been removed. Context: 'hwloc-x11=no.tar.gz\hwloc-x11=no.tar' Reason: The data size limit was exceeded Limit: 10 MB Ticket Number : 0c20-5e8b-97c0-000f <<<<<< Can you retry the test below - i.e './configure --with-mpi=0 --download-hwloc --with-x=0' - and see if this works? What do you get for ldd [as shown below]? Satish On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > I've tried two installations in that branch, a "full installation" with > every package I need, and a minimal one. I attach the results here, these > libraries are still being linked > > On Mon, Apr 6, 2020 at 4:41 PM Satish Balay wrote: > > > Hm - its working for me on CentOS7 [I see you are using RHEL7/CentOS7] > > > > > > Can you try the branch balay/fix-hwloc-x-dependency/maint - and see what > > you get? 
> > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 --download-hwloc > > --with-x=0 > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd arch-linux2-c-debug/lib/libhwloc.so > > > > > > linux-vdso.so.1 => (0x00007ffe88bae000) > > libm.so.6 => /lib64/libm.so.6 (0x00007ff75c7b9000) > > libc.so.6 => /lib64/libc.so.6 (0x00007ff75c3eb000) > > /lib64/ld-linux-x86-64.so.2 (0x00007ff75cd1a000) > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 --download-hwloc > > --with-x=1 > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd arch-linux2-c-debug/lib/libhwloc.so > > > > > > linux-vdso.so.1 => (0x00007ffe129cc000) > > libm.so.6 => /lib64/libm.so.6 (0x00007f89920fa000) > > libX11.so.6 => /lib64/libX11.so.6 (0x00007f8991dbc000) > > libc.so.6 => /lib64/libc.so.6 (0x00007f89919ee000) > > /lib64/ld-linux-x86-64.so.2 (0x00007f899265b000) > > libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f89917c6000) > > libdl.so.2 => /lib64/libdl.so.2 (0x00007f89915c2000) > > libXau.so.6 => /lib64/libXau.so.6 (0x00007f89913be000) > > [balay at petsc-c7 petsc:d409bfc]$ > > > > Satish > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > No, it doesn't work... I also tried --with-x=disabled. I gave a look to > > > ./configure --help in the hwloc directory and to the configure file > > itself, > > > but Im not finding the right option. > > > > > > On Mon, Apr 6, 2020 at 3:53 PM Satish Balay wrote: > > > > > > > Looks like the option is --with-x=no > > > > > > > > Can you give this a try? Branch balay/fix-hwloc-x-dependency/maint has > > > > this update now > > > > > > > > Satish > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > Hello Satish, Im sorry but I think I tested this workaround with the > > > > wrong > > > > > installation, my bad. > > > > > In fact the libXNVC, X11 libraries are still being linked even with > > the > > > > > --download-hwloc-configure-arguments=--without-x option > > > > > > > > > > Im attaching the config.log file located in > > > > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > > > > you can see that the option --without-x is there (line 7) but by > > means of > > > > > $ ldd foo > > > > > I see that the links are still there > > > > > > > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay > > wrote: > > > > > > > > > > > Great! > > > > > > > > > > > > Wrt pastix dependency on hwloc - > > > > > > config/BuildSystem/config/packages/PaStiX.py has the following > > comment: > > > > > > > > > > > > # PaStiX.py does not absolutely require hwloc, but it performs > > > > better > > > > > > with it and can fail (in ways not easily tested) without it > > > > > > # > > > > > > > > > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > > > > # > > https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that > > does > > > > not > > > > > > need the extra --download-hwloc-configure-arguments=--without-x > > > > option]. > > > > > > Can you give this a try? > > > > > > > > > > > > Satish > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > hello Satish, > > > > > > > adding > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > worked perfectly > > > > > > > > > > > > > > thank you! 
> > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay > > > > wrote: > > > > > > > > > > > > > > > you can try: > > > > > > > > > > > > > > > > --download-pastix --download-hwloc > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > hello everyone, > > > > > > > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version with > > pastix. > > > > Like > > > > > > with > > > > > > > > > previous versions, I'm using the options > > > > > > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > > > > > > > between the linked libraries there appear: > > > > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > > > > > > > the first one related to NVIDIA. I observed that this does > > not > > > > happen > > > > > > > > when > > > > > > > > > installing PETSc without hwloc. In this new version, PETSc > > > > requires > > > > > > to > > > > > > > > > install hwloc when trying to install pastix. In previous > > > > versions of > > > > > > > > PETSc > > > > > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > > > > > > > > > I'm working in a cluster where I have no access to these > > > > X11-related > > > > > > > > > libraries and that's why I need them not be linked. Is it > > there > > > > some > > > > > > way > > > > > > > > to > > > > > > > > > disable X11 when installing hwloc? maybe enforcing some > > > > configuration > > > > > > > > > variables when installing it through petsc or installing it > > > > > > > > independently? > > > > > > > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > > > > > > > Below the configuration command of the two installations I've > > > > tried > > > > > > with > > > > > > > > > the 3.13.0 version. 
> > > > > > > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > > > > --with-debugging=0 > > > > > > > > > --doCleanup=0 \ > > > > > > > > > --with-mpi=1 \ > > > > > > > > > --with-valgrind=1 > > > > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > > > > --download-scalapack \ > > > > > > > > > --download-openblas \ > > > > > > > > > --download-mumps \ > > > > > > > > > --download-superlu_dist \ > > > > > > > > > --download-metis \ > > > > > > > > > --download-parmetis \ > > > > > > > > > --download-ptscotch \ > > > > > > > > > --download-hypre \ > > > > > > > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > > > > --with-64-bit-indices=1 \ > > > > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > > > > --with-cxx-dialect=C++11 \ > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > > > > > > > > > > > the same as above but the options "--download-pastix > > > > > > --download-hwloc" > > > > > > > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From ajaramillopalma at gmail.com Mon Apr 6 16:05:58 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 18:05:58 -0300 Subject: [petsc-users] [WARNING: UNSCANNABLE EXTRACTION FAILED]Re: deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: ops... you can find the attachments here https://www.dropbox.com/sh/n8scz7wioe01t6p/AAC3P3iU1jLYsW6m3vs6S5DHa?dl=0 I got $ ldd minimal/lib/libhwloc.so linux-vdso.so.1 => (0x00007fffcc3ae000) libm.so.6 => /usr/lib64/libm.so.6 (0x00007ff0b90a2000) libXNVCtrl.so.0 => /usr/lib64/libXNVCtrl.so.0 (0x00007ff0b8e9d000) libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007ff0b8c8b000) libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007ff0b894d000) libc.so.6 => /usr/lib64/libc.so.6 (0x00007ff0b8580000) /lib64/ld-linux-x86-64.so.2 (0x00007ff0b9604000) libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007ff0b8358000) libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007ff0b8154000) libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007ff0b7f50000) On Mon, Apr 6, 2020 at 6:02 PM Satish Balay wrote: > The attachments didn't come through.. > > >>>> > DENIAL OF SERVICE ALERT > > A denial of service protection limit was exceeded. The file has been > removed. > Context: 'hwloc-x11=no.tar.gz\hwloc-x11=no.tar' > Reason: The data size limit was exceeded > Limit: 10 MB > Ticket Number : 0c20-5e8b-97c0-000f > <<<<<< > > Can you retry the test below - i.e './configure --with-mpi=0 > --download-hwloc --with-x=0' - and see if this works? > > What do you get for ldd [as shown below]? > > Satish > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > I've tried two installations in that branch, a "full installation" with > > every package I need, and a minimal one. 
I attach the results here, these > > libraries are still being linked > > > > On Mon, Apr 6, 2020 at 4:41 PM Satish Balay wrote: > > > > > Hm - its working for me on CentOS7 [I see you are using RHEL7/CentOS7] > > > > > > > > > Can you try the branch balay/fix-hwloc-x-dependency/maint - and see > what > > > you get? > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > --download-hwloc > > > --with-x=0 > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > linux-vdso.so.1 => (0x00007ffe88bae000) > > > libm.so.6 => /lib64/libm.so.6 (0x00007ff75c7b9000) > > > libc.so.6 => /lib64/libc.so.6 (0x00007ff75c3eb000) > > > /lib64/ld-linux-x86-64.so.2 (0x00007ff75cd1a000) > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > --download-hwloc > > > --with-x=1 > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > linux-vdso.so.1 => (0x00007ffe129cc000) > > > libm.so.6 => /lib64/libm.so.6 (0x00007f89920fa000) > > > libX11.so.6 => /lib64/libX11.so.6 (0x00007f8991dbc000) > > > libc.so.6 => /lib64/libc.so.6 (0x00007f89919ee000) > > > /lib64/ld-linux-x86-64.so.2 (0x00007f899265b000) > > > libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f89917c6000) > > > libdl.so.2 => /lib64/libdl.so.2 (0x00007f89915c2000) > > > libXau.so.6 => /lib64/libXau.so.6 (0x00007f89913be000) > > > [balay at petsc-c7 petsc:d409bfc]$ > > > > > > Satish > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > No, it doesn't work... I also tried --with-x=disabled. I gave a look > to > > > > ./configure --help in the hwloc directory and to the configure file > > > itself, > > > > but Im not finding the right option. > > > > > > > > On Mon, Apr 6, 2020 at 3:53 PM Satish Balay > wrote: > > > > > > > > > Looks like the option is --with-x=no > > > > > > > > > > Can you give this a try? Branch balay/fix-hwloc-x-dependency/maint > has > > > > > this update now > > > > > > > > > > Satish > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > Hello Satish, Im sorry but I think I tested this workaround with > the > > > > > wrong > > > > > > installation, my bad. > > > > > > In fact the libXNVC, X11 libraries are still being linked even > with > > > the > > > > > > --download-hwloc-configure-arguments=--without-x option > > > > > > > > > > > > Im attaching the config.log file located in > > > > > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > > > > > you can see that the option --without-x is there (line 7) but by > > > means of > > > > > > $ ldd foo > > > > > > I see that the links are still there > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay > > > wrote: > > > > > > > > > > > > > Great! > > > > > > > > > > > > > > Wrt pastix dependency on hwloc - > > > > > > > config/BuildSystem/config/packages/PaStiX.py has the following > > > comment: > > > > > > > > > > > > > > # PaStiX.py does not absolutely require hwloc, but it > performs > > > > > better > > > > > > > with it and can fail (in ways not easily tested) without it > > > > > > > # > > > > > > > > > > > > > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > > > > > # > > > https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > > > > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that > > > does > > > > > not > > > > > > > need the extra --download-hwloc-configure-arguments=--without-x > > > > > option]. 
> > > > > > > Can you give this a try? > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > hello Satish, > > > > > > > > adding > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > worked perfectly > > > > > > > > > > > > > > > > thank you! > > > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay < > balay at mcs.anl.gov> > > > > > wrote: > > > > > > > > > > > > > > > > > you can try: > > > > > > > > > > > > > > > > > > --download-pastix --download-hwloc > > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > > > hello everyone, > > > > > > > > > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version with > > > pastix. > > > > > Like > > > > > > > with > > > > > > > > > > previous versions, I'm using the options > > > > > > > > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > > > > > > > > > between the linked libraries there appear: > > > > > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > > > > > > > > > the first one related to NVIDIA. I observed that this > does > > > not > > > > > happen > > > > > > > > > when > > > > > > > > > > installing PETSc without hwloc. In this new version, > PETSc > > > > > requires > > > > > > > to > > > > > > > > > > install hwloc when trying to install pastix. In previous > > > > > versions of > > > > > > > > > PETSc > > > > > > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > > > > > > > > > > > I'm working in a cluster where I have no access to these > > > > > X11-related > > > > > > > > > > libraries and that's why I need them not be linked. Is it > > > there > > > > > some > > > > > > > way > > > > > > > > > to > > > > > > > > > > disable X11 when installing hwloc? maybe enforcing some > > > > > configuration > > > > > > > > > > variables when installing it through petsc or installing > it > > > > > > > > > independently? > > > > > > > > > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > > > > > > > > > Below the configuration command of the two installations > I've > > > > > tried > > > > > > > with > > > > > > > > > > the 3.13.0 version. 
> > > > > > > > > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > > > > > --with-debugging=0 > > > > > > > > > > --doCleanup=0 \ > > > > > > > > > > --with-mpi=1 \ > > > > > > > > > > --with-valgrind=1 > > > > > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > > > > > --download-scalapack \ > > > > > > > > > > --download-openblas \ > > > > > > > > > > --download-mumps \ > > > > > > > > > > --download-superlu_dist \ > > > > > > > > > > --download-metis \ > > > > > > > > > > --download-parmetis \ > > > > > > > > > > --download-ptscotch \ > > > > > > > > > > --download-hypre \ > > > > > > > > > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > > > > > --with-64-bit-indices=1 \ > > > > > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > > > > > --with-cxx-dialect=C++11 \ > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > > > > > > > > > > > > > the same as above but the options "--download-pastix > > > > > > > --download-hwloc" > > > > > > > > > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 6 16:39:17 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 16:39:17 -0500 (CDT) Subject: [petsc-users] [WARNING: UNSCANNABLE EXTRACTION FAILED]Re: deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: I pushed another change to branch balay/fix-hwloc-x-dependency/maint Can you check again? BTW: The following options don't make sense when using --with-mpi=0 CPPFLAGS=-I/scratch/app/openmpi/4.0_gnu/include -I/scratch/app/mpc/1.0.3/include -I/scratch/app/isl/0.18/include -I/scratch/app/gcc/6.5/include LDFLAGS=-L/scratch/app/openmpi/4.0_gnu/lib -L/scratch/app/mpc/1.0.3/lib -L/scratch/app/isl/0.18/lib -L/scratch/app/gcc/6.5/lib64 -L/scratch/app/gcc/6.5/lib [And likely these options are missing appropriate quotes..] Satish On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > ops... you can find the attachments here > https://www.dropbox.com/sh/n8scz7wioe01t6p/AAC3P3iU1jLYsW6m3vs6S5DHa?dl=0 > > I got > > $ ldd minimal/lib/libhwloc.so > linux-vdso.so.1 => (0x00007fffcc3ae000) > libm.so.6 => /usr/lib64/libm.so.6 (0x00007ff0b90a2000) > libXNVCtrl.so.0 => /usr/lib64/libXNVCtrl.so.0 (0x00007ff0b8e9d000) > libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007ff0b8c8b000) > libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007ff0b894d000) > libc.so.6 => /usr/lib64/libc.so.6 (0x00007ff0b8580000) > /lib64/ld-linux-x86-64.so.2 (0x00007ff0b9604000) > libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007ff0b8358000) > libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007ff0b8154000) > libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007ff0b7f50000) > > On Mon, Apr 6, 2020 at 6:02 PM Satish Balay wrote: > > > The attachments didn't come through.. 
> > > > >>>> > > DENIAL OF SERVICE ALERT > > > > A denial of service protection limit was exceeded. The file has been > > removed. > > Context: 'hwloc-x11=no.tar.gz\hwloc-x11=no.tar' > > Reason: The data size limit was exceeded > > Limit: 10 MB > > Ticket Number : 0c20-5e8b-97c0-000f > > <<<<<< > > > > Can you retry the test below - i.e './configure --with-mpi=0 > > --download-hwloc --with-x=0' - and see if this works? > > > > What do you get for ldd [as shown below]? > > > > Satish > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > I've tried two installations in that branch, a "full installation" with > > > every package I need, and a minimal one. I attach the results here, these > > > libraries are still being linked > > > > > > On Mon, Apr 6, 2020 at 4:41 PM Satish Balay wrote: > > > > > > > Hm - its working for me on CentOS7 [I see you are using RHEL7/CentOS7] > > > > > > > > > > > > Can you try the branch balay/fix-hwloc-x-dependency/maint - and see > > what > > > > you get? > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > > --download-hwloc > > > > --with-x=0 > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > > > > linux-vdso.so.1 => (0x00007ffe88bae000) > > > > libm.so.6 => /lib64/libm.so.6 (0x00007ff75c7b9000) > > > > libc.so.6 => /lib64/libc.so.6 (0x00007ff75c3eb000) > > > > /lib64/ld-linux-x86-64.so.2 (0x00007ff75cd1a000) > > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > > --download-hwloc > > > > --with-x=1 > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > > > > linux-vdso.so.1 => (0x00007ffe129cc000) > > > > libm.so.6 => /lib64/libm.so.6 (0x00007f89920fa000) > > > > libX11.so.6 => /lib64/libX11.so.6 (0x00007f8991dbc000) > > > > libc.so.6 => /lib64/libc.so.6 (0x00007f89919ee000) > > > > /lib64/ld-linux-x86-64.so.2 (0x00007f899265b000) > > > > libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f89917c6000) > > > > libdl.so.2 => /lib64/libdl.so.2 (0x00007f89915c2000) > > > > libXau.so.6 => /lib64/libXau.so.6 (0x00007f89913be000) > > > > [balay at petsc-c7 petsc:d409bfc]$ > > > > > > > > Satish > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > No, it doesn't work... I also tried --with-x=disabled. I gave a look > > to > > > > > ./configure --help in the hwloc directory and to the configure file > > > > itself, > > > > > but Im not finding the right option. > > > > > > > > > > On Mon, Apr 6, 2020 at 3:53 PM Satish Balay > > wrote: > > > > > > > > > > > Looks like the option is --with-x=no > > > > > > > > > > > > Can you give this a try? Branch balay/fix-hwloc-x-dependency/maint > > has > > > > > > this update now > > > > > > > > > > > > Satish > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > Hello Satish, Im sorry but I think I tested this workaround with > > the > > > > > > wrong > > > > > > > installation, my bad. 
> > > > > > > In fact the libXNVC, X11 libraries are still being linked even > > with > > > > the > > > > > > > --download-hwloc-configure-arguments=--without-x option > > > > > > > > > > > > > > Im attaching the config.log file located in > > > > > > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > > > > > > you can see that the option --without-x is there (line 7) but by > > > > means of > > > > > > > $ ldd foo > > > > > > > I see that the links are still there > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay > > > > wrote: > > > > > > > > > > > > > > > Great! > > > > > > > > > > > > > > > > Wrt pastix dependency on hwloc - > > > > > > > > config/BuildSystem/config/packages/PaStiX.py has the following > > > > comment: > > > > > > > > > > > > > > > > # PaStiX.py does not absolutely require hwloc, but it > > performs > > > > > > better > > > > > > > > with it and can fail (in ways not easily tested) without it > > > > > > > > # > > > > > > > > > > > > > > > > > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > > > > > > # > > > > https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > > > > > > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint [that > > > > does > > > > > > not > > > > > > > > need the extra --download-hwloc-configure-arguments=--without-x > > > > > > option]. > > > > > > > > Can you give this a try? > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > hello Satish, > > > > > > > > > adding > > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > worked perfectly > > > > > > > > > > > > > > > > > > thank you! > > > > > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay < > > balay at mcs.anl.gov> > > > > > > wrote: > > > > > > > > > > > > > > > > > > > you can try: > > > > > > > > > > > > > > > > > > > > --download-pastix --download-hwloc > > > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > > > > > hello everyone, > > > > > > > > > > > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version with > > > > pastix. > > > > > > Like > > > > > > > > with > > > > > > > > > > > previous versions, I'm using the options > > > > > > > > > > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > > > > > > > > > > > between the linked libraries there appear: > > > > > > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > > > > > > > > > > > the first one related to NVIDIA. I observed that this > > does > > > > not > > > > > > happen > > > > > > > > > > when > > > > > > > > > > > installing PETSc without hwloc. In this new version, > > PETSc > > > > > > requires > > > > > > > > to > > > > > > > > > > > install hwloc when trying to install pastix. In previous > > > > > > versions of > > > > > > > > > > PETSc > > > > > > > > > > > (eg 3.11.2) that wasn't necessary. 
> > > > > > > > > > > > > > > > > > > > > > I'm working in a cluster where I have no access to these > > > > > > X11-related > > > > > > > > > > > libraries and that's why I need them not be linked. Is it > > > > there > > > > > > some > > > > > > > > way > > > > > > > > > > to > > > > > > > > > > > disable X11 when installing hwloc? maybe enforcing some > > > > > > configuration > > > > > > > > > > > variables when installing it through petsc or installing > > it > > > > > > > > > > independently? > > > > > > > > > > > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > > > > > > > > > > > Below the configuration command of the two installations > > I've > > > > > > tried > > > > > > > > with > > > > > > > > > > > the 3.13.0 version. > > > > > > > > > > > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > > > > > > --with-debugging=0 > > > > > > > > > > > --doCleanup=0 \ > > > > > > > > > > > --with-mpi=1 \ > > > > > > > > > > > --with-valgrind=1 > > > > > > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > > > > > > --download-scalapack \ > > > > > > > > > > > --download-openblas \ > > > > > > > > > > > --download-mumps \ > > > > > > > > > > > --download-superlu_dist \ > > > > > > > > > > > --download-metis \ > > > > > > > > > > > --download-parmetis \ > > > > > > > > > > > --download-ptscotch \ > > > > > > > > > > > --download-hypre \ > > > > > > > > > > > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > > > > > > --with-64-bit-indices=1 \ > > > > > > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > > > > > > --with-cxx-dialect=C++11 \ > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > > > > > > > > > > > =================== WITHOUT PASTIX =================== > > > > > > > > > > > > > > > > > > > > > > the same as above but the options "--download-pastix > > > > > > > > --download-hwloc" > > > > > > > > > > > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From jed at jedbrown.org Mon Apr 6 17:06:19 2020 From: jed at jedbrown.org (Jed Brown) Date: Mon, 06 Apr 2020 16:06:19 -0600 Subject: [petsc-users] strange TS adaptivity behavior In-Reply-To: References: <87a73o5rbi.fsf@jedbrown.org> Message-ID: <874ktw5kx0.fsf@jedbrown.org> Mark Adams writes: > On Mon, Apr 6, 2020 at 3:47 PM Jed Brown wrote: > >> This typically happens when your model is discontinuous or you activate >> some fast transitien. You have a really aggressive lower bound on your >> time step so its hard to tell here. >> > > The time step goes down to like 1e-3. > I am suspecting that my source term needs to be smoother and am working on > that now. > It is disconcerting that the truncation error goes up as the time stp is > reduced, but in this example it does start to go down again. If you increase resolution at differentiating a discontinuity, you'd expect to become more and more confident that the error is large. 
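[Editor's sketch] The step-size lower bound and truncation-error tolerances being discussed in this exchange can also be set programmatically instead of on the command line. The following is a minimal, illustrative C sketch only, not the poster's code: it assumes an already created and configured TS object named ts, and the numerical values are placeholders.

    #include <petscts.h>

    /* Sketch: set adaptive-controller limits and tolerances in code.
       Roughly equivalent command-line options: -ts_adapt_dt_min,
       -ts_adapt_dt_max, -ts_atol, -ts_rtol. Values are placeholders. */
    PetscErrorCode SetAdaptControls(TS ts)
    {
      TSAdapt        adapt;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = TSGetAdapt(ts, &adapt); CHKERRQ(ierr);
      /* lower and upper bounds on the time step */
      ierr = TSAdaptSetStepLimits(adapt, 1.0e-3, 1.0e-1); CHKERRQ(ierr);
      /* absolute and relative local truncation error tolerances */
      ierr = TSSetTolerances(ts, 1.0e-3, NULL, 1.0e-3, NULL); CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }
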
From rtmills at anl.gov Mon Apr 6 17:51:48 2020 From: rtmills at anl.gov (Mills, Richard Tran) Date: Mon, 6 Apr 2020 22:51:48 +0000 Subject: [petsc-users] Gather and Broadcast Parallel Vectors in k-means algorithm In-Reply-To: References: Message-ID: <0c10fc0a-3d86-e91a-f349-fd7c087ba8ed@anl.gov> Hi Eda, I think that you probably want to use VecScatter routines, as Junchao has suggested, instead of the lower level star forest for this. I believe that VecScatterCreateToZero() is what you want for the broadcast problem you describe, in the second part of your question. I'm not sure what you are trying to do in the first part. Taking a parallel vector and then copying its entire contents to a sequential vector residing on each process is not scalable, and a lot of the design that has gone into PETSc is to prevent the user from ever needing to do things like that. Can you please tell us what you intend to do with these sequential vectors? I'm also wondering why, later in your message, you say that you get cluster assignments from Matlab, and then "to cluster row vectors according to this information, all processors need to have all of the row vectors". Do you mean you want to get all of the row vectors copied onto all of the processors so that you can compute the cluster centroids? If so, computing the cluster centroids can be done without copying the row vectors onto all processors if you use a communication operation like MPI_Allreduce(). Lastly, let me add that I've done a fair amount of work implementing clustering algorithms on distributed memory parallel machines, but outside of PETSc. I was thinking that I should implement some of these routines using PETSc. I can't get to this immediately, but I'm wondering if you might care to tell me a bit more about the clustering problems you need to solve and how having some support for this in PETSc might (or might not) help. Best regards, Richard On 4/4/20 1:39 AM, Eda Oktay wrote: > Hi all, > > I created a parallel?vector UV, by using VecDuplicateVecs since I need > row vectors of a matrix. However, I need the whole vector be in all > processors, which means I need to gather all and broadcast them to all > processors. To gather, I tried to use VecStrideGatherAll: > > ? Vec UVG; > ? VecStrideGatherAll(UV,UVG,INSERT_VALUES); > ? VecView(UVG,PETSC_VIEWER_STDOUT_WORLD); > > ?however when I try to view the vector, I get the following error. > > [3]PETSC ERROR: Invalid argument > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > [3]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [3]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > [3]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr ?4 > 11:22:54 2020 > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > [0]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr ?4 > 11:22:54 2020 > [0]PETSC ERROR: Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [0]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr ?4 > 11:22:54 2020 > [1]PETSC ERROR: Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [1]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [3]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > > I couldn't understand?why I am getting this error. Is this because of > UV being created by VecDuplicateVecs? How can I solve this problem? > > The other question is broadcasting. After gathering all elements of > the vector UV, I need to broadcast them to all processors. I found > PetscSFBcastBegin. However, I couldn't understand the PetscSF concept > properly. I couldn't adjust my question to the star forest concept. > > My problem is: If I have 4 processors, I create a matrix whose columns > are 4 smallest eigenvectors, say of size 72. Then by defining each row > of this matrix as a vector, I cluster them by using k-means > clustering?algorithm. For now, I cluster them by using MATLAB and I > obtain a vector showing which row vector is in which cluster. After > getting this vector, to cluster row vectors according to this > information, all processors need to have all of the row vectors. > > According to this problem, how can I use the star forest concept? > > I will be glad if you can help me about this problem since I don't > have enough knowledge about graph theory. An if you have any idea > about how can I use k-means algorithm in a more practical way, please > let me know. > > Thanks! > > Eda From ajaramillopalma at gmail.com Mon Apr 6 18:24:57 2020 From: ajaramillopalma at gmail.com (Alfredo Jaramillo) Date: Mon, 6 Apr 2020 20:24:57 -0300 Subject: [petsc-users] [WARNING: UNSCANNABLE EXTRACTION FAILED]Re: deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: it worked fine! 
the previous output: [alfredo.jaramillo at sdumont14 petsc]$ ldd arch-linux2-c-debug-actual/lib/libhwloc.so linux-vdso.so.1 => (0x00007ffde6f49000) libm.so.6 => /usr/lib64/libm.so.6 (0x00007fb8be4ed000) libXNVCtrl.so.0 => /usr/lib64/libXNVCtrl.so.0 (0x00007fb8be2e8000) libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007fb8be0d6000) libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007fb8bdd98000) libc.so.6 => /usr/lib64/libc.so.6 (0x00007fb8bd9cb000) /lib64/ld-linux-x86-64.so.2 (0x00007fb8bea4f000) libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007fb8bd7a3000) libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007fb8bd59f000) libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007fb8bd39b000) the updated output: [alfredo.jaramillo at sdumont14 petsc]$ ldd arch-linux2-c-debug-previous/lib/libhwloc.so linux-vdso.so.1 => (0x00007fff135c7000) libm.so.6 => /usr/lib64/libm.so.6 (0x00007f5ecd5f3000) libc.so.6 => /usr/lib64/libc.so.6 (0x00007f5ecd226000) /lib64/ld-linux-x86-64.so.2 (0x00007f5ecdb54000 I added the option --download-hwloc-configure-arguments="--disable-opencl --disable-cuda --disable-nvml --disable-gl" to my installation and pastix is working fine, thank you very much regards On Mon, Apr 6, 2020 at 6:39 PM Satish Balay wrote: > I pushed another change to branch balay/fix-hwloc-x-dependency/maint > > Can you check again? > > BTW: The following options don't make sense when using --with-mpi=0 > > CPPFLAGS=-I/scratch/app/openmpi/4.0_gnu/include > -I/scratch/app/mpc/1.0.3/include -I/scratch/app/isl/0.18/include > -I/scratch/app/gcc/6.5/include LDFLAGS=-L/scratch/app/openmpi/4.0_gnu/lib > -L/scratch/app/mpc/1.0.3/lib -L/scratch/app/isl/0.18/lib > -L/scratch/app/gcc/6.5/lib64 -L/scratch/app/gcc/6.5/lib > > [And likely these options are missing appropriate quotes..] > > Satish > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > ops... you can find the attachments here > > > https://www.dropbox.com/sh/n8scz7wioe01t6p/AAC3P3iU1jLYsW6m3vs6S5DHa?dl=0 > > > > I got > > > > $ ldd minimal/lib/libhwloc.so > > linux-vdso.so.1 => (0x00007fffcc3ae000) > > libm.so.6 => /usr/lib64/libm.so.6 (0x00007ff0b90a2000) > > libXNVCtrl.so.0 => /usr/lib64/libXNVCtrl.so.0 (0x00007ff0b8e9d000) > > libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007ff0b8c8b000) > > libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007ff0b894d000) > > libc.so.6 => /usr/lib64/libc.so.6 (0x00007ff0b8580000) > > /lib64/ld-linux-x86-64.so.2 (0x00007ff0b9604000) > > libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007ff0b8358000) > > libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007ff0b8154000) > > libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007ff0b7f50000) > > > > On Mon, Apr 6, 2020 at 6:02 PM Satish Balay wrote: > > > > > The attachments didn't come through.. > > > > > > >>>> > > > DENIAL OF SERVICE ALERT > > > > > > A denial of service protection limit was exceeded. The file has been > > > removed. > > > Context: 'hwloc-x11=no.tar.gz\hwloc-x11=no.tar' > > > Reason: The data size limit was exceeded > > > Limit: 10 MB > > > Ticket Number : 0c20-5e8b-97c0-000f > > > <<<<<< > > > > > > Can you retry the test below - i.e './configure --with-mpi=0 > > > --download-hwloc --with-x=0' - and see if this works? > > > > > > What do you get for ldd [as shown below]? > > > > > > Satish > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > I've tried two installations in that branch, a "full installation" > with > > > > every package I need, and a minimal one. 
I attach the results here, > these > > > > libraries are still being linked > > > > > > > > On Mon, Apr 6, 2020 at 4:41 PM Satish Balay > wrote: > > > > > > > > > Hm - its working for me on CentOS7 [I see you are using > RHEL7/CentOS7] > > > > > > > > > > > > > > > Can you try the branch balay/fix-hwloc-x-dependency/maint - and see > > > what > > > > > you get? > > > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > > > --download-hwloc > > > > > --with-x=0 > > > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > > > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > > > > > > > linux-vdso.so.1 => (0x00007ffe88bae000) > > > > > libm.so.6 => /lib64/libm.so.6 (0x00007ff75c7b9000) > > > > > libc.so.6 => /lib64/libc.so.6 (0x00007ff75c3eb000) > > > > > /lib64/ld-linux-x86-64.so.2 (0x00007ff75cd1a000) > > > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > > > --download-hwloc > > > > > --with-x=1 > > > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > > > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > > > > > > > linux-vdso.so.1 => (0x00007ffe129cc000) > > > > > libm.so.6 => /lib64/libm.so.6 (0x00007f89920fa000) > > > > > libX11.so.6 => /lib64/libX11.so.6 (0x00007f8991dbc000) > > > > > libc.so.6 => /lib64/libc.so.6 (0x00007f89919ee000) > > > > > /lib64/ld-linux-x86-64.so.2 (0x00007f899265b000) > > > > > libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f89917c6000) > > > > > libdl.so.2 => /lib64/libdl.so.2 (0x00007f89915c2000) > > > > > libXau.so.6 => /lib64/libXau.so.6 (0x00007f89913be000) > > > > > [balay at petsc-c7 petsc:d409bfc]$ > > > > > > > > > > Satish > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > No, it doesn't work... I also tried --with-x=disabled. I gave a > look > > > to > > > > > > ./configure --help in the hwloc directory and to the configure > file > > > > > itself, > > > > > > but Im not finding the right option. > > > > > > > > > > > > On Mon, Apr 6, 2020 at 3:53 PM Satish Balay > > > wrote: > > > > > > > > > > > > > Looks like the option is --with-x=no > > > > > > > > > > > > > > Can you give this a try? Branch > balay/fix-hwloc-x-dependency/maint > > > has > > > > > > > this update now > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > Hello Satish, Im sorry but I think I tested this workaround > with > > > the > > > > > > > wrong > > > > > > > > installation, my bad. > > > > > > > > In fact the libXNVC, X11 libraries are still being linked > even > > > with > > > > > the > > > > > > > > --download-hwloc-configure-arguments=--without-x option > > > > > > > > > > > > > > > > Im attaching the config.log file located in > > > > > > > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > > > > > > > you can see that the option --without-x is there (line 7) > but by > > > > > means of > > > > > > > > $ ldd foo > > > > > > > > I see that the links are still there > > > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay < > balay at mcs.anl.gov> > > > > > wrote: > > > > > > > > > > > > > > > > > Great! 
> > > > > > > > > > > > > > > > > > Wrt pastix dependency on hwloc - > > > > > > > > > config/BuildSystem/config/packages/PaStiX.py has the > following > > > > > comment: > > > > > > > > > > > > > > > > > > # PaStiX.py does not absolutely require hwloc, but it > > > performs > > > > > > > better > > > > > > > > > with it and can fail (in ways not easily tested) without it > > > > > > > > > # > > > > > > > > > > > > > > > > > > > > > > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > > > > > > > # > > > > > https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > > > > > > > > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint > [that > > > > > does > > > > > > > not > > > > > > > > > need the extra > --download-hwloc-configure-arguments=--without-x > > > > > > > option]. > > > > > > > > > Can you give this a try? > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > > > hello Satish, > > > > > > > > > > adding > > > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > worked perfectly > > > > > > > > > > > > > > > > > > > > thank you! > > > > > > > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay < > > > balay at mcs.anl.gov> > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > you can try: > > > > > > > > > > > > > > > > > > > > > > --download-pastix --download-hwloc > > > > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > > > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > > > > > > > hello everyone, > > > > > > > > > > > > > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version > with > > > > > pastix. > > > > > > > Like > > > > > > > > > with > > > > > > > > > > > > previous versions, I'm using the options > > > > > > > > > > > > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > > > > > > > > > > > > > between the linked libraries there appear: > > > > > > > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > > > > > > > > > > > > > the first one related to NVIDIA. I observed that this > > > does > > > > > not > > > > > > > happen > > > > > > > > > > > when > > > > > > > > > > > > installing PETSc without hwloc. In this new version, > > > PETSc > > > > > > > requires > > > > > > > > > to > > > > > > > > > > > > install hwloc when trying to install pastix. In > previous > > > > > > > versions of > > > > > > > > > > > PETSc > > > > > > > > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > > > > > > > > > > > > > > > I'm working in a cluster where I have no access to > these > > > > > > > X11-related > > > > > > > > > > > > libraries and that's why I need them not be linked. > Is it > > > > > there > > > > > > > some > > > > > > > > > way > > > > > > > > > > > to > > > > > > > > > > > > disable X11 when installing hwloc? 
maybe enforcing > some > > > > > > > configuration > > > > > > > > > > > > variables when installing it through petsc or > installing > > > it > > > > > > > > > > > independently? > > > > > > > > > > > > > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > > > > > > > > > > > > > Below the configuration command of the two > installations > > > I've > > > > > > > tried > > > > > > > > > with > > > > > > > > > > > > the 3.13.0 version. > > > > > > > > > > > > > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > > > > > > > --with-debugging=0 > > > > > > > > > > > > --doCleanup=0 \ > > > > > > > > > > > > --with-mpi=1 \ > > > > > > > > > > > > --with-valgrind=1 > > > > > > > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > > > > > > > --download-scalapack \ > > > > > > > > > > > > --download-openblas \ > > > > > > > > > > > > --download-mumps \ > > > > > > > > > > > > --download-superlu_dist \ > > > > > > > > > > > > --download-metis \ > > > > > > > > > > > > --download-parmetis \ > > > > > > > > > > > > --download-ptscotch \ > > > > > > > > > > > > --download-hypre \ > > > > > > > > > > > > > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > > > > > > > --with-64-bit-indices=1 \ > > > > > > > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > > > > > > > --with-cxx-dialect=C++11 \ > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > > > > > > > > > > > > > =================== WITHOUT PASTIX > =================== > > > > > > > > > > > > > > > > > > > > > > > > the same as above but the options "--download-pastix > > > > > > > > > --download-hwloc" > > > > > > > > > > > > > > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexprescott at email.arizona.edu Mon Apr 6 20:43:42 2020 From: alexprescott at email.arizona.edu (Alexander B Prescott) Date: Mon, 6 Apr 2020 18:43:42 -0700 Subject: [petsc-users] [EXT]Re: Discontinuities in the Jacobian matrix for nonlinear problem In-Reply-To: References: Message-ID: Hi Nathan, Thanks for the thoughtful response. For the 1D toy version I force flow to occur at every node, but my ultimate objective is to apply this to a 2D floodplain in which a node may or may not have flow in the 'true' solution. Based on what you point out, however, it seems that this approach may be doomed to fail. For one thing, using a 5 point stencil, a node with no flow and whose neighbor's have no flow will produce a row in the 'bent physics' Jacobian of all zero entries. Based on my experience so far that will cause PETSc to return errors. Does that sound right to you? I agree that the difficulty encountered with this sort of problem is counterintuitive to the relative simplification made on nature! 
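[Editor's sketch] For illustration only, the flux "bending" referred to here (and spelled out in the quoted messages further down) can be written so that its derivative never vanishes at zero water-surface slope, which avoids the all-zero Jacobian row described above. In this minimal C sketch, Cq (a proportionality constant) and eps are hypothetical parameters, and the linear middle branch is just one possible regularization, not the original model:

    #include <petscsys.h>

    /* Sketch of a regularized flux Q as a function of the water-surface drop
       dEta = (x[i-1] + z[i-1]) - (x[i] + z[i]). Outside |dEta| <= eps it follows
       the sqrt law from the thread; inside the band it is linear, chosen so the
       function is continuous at +/- eps and dQ/dEta stays strictly positive, so
       a ponded node does not produce an identically zero Jacobian row.
       Cq and eps are illustrative parameters only. */
    static PetscReal FluxRegularized(PetscReal dEta, PetscReal Cq, PetscReal eps)
    {
      if (dEta >  eps) return  Cq * PetscSqrtReal(dEta);
      if (dEta < -eps) return -Cq * PetscSqrtReal(-dEta);
      return Cq * dEta / PetscSqrtReal(eps); /* matches +/- Cq*sqrt(eps) at the band edges */
    }
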
Best, Alexander On Mon, Apr 6, 2020 at 12:45 PM Nathan Collier wrote: > *External Email* > Alexander, > > I am not familiar with your specific model, but I do have experience > working with the diffusive / kinematic wave approximation to shallow water > equations. I always had a lot of trouble near ponding conditions (steady > state), which is odd because when nature says do nothing, you don't expect > the equations to be hard to solve. Some kind soul pointed out to me that > this was because the equations are derived using a power law model relating > flow to slope/depth which breaks down when you should have no flow. So as > long as you have flow, the solver behaves well and things are fine (i.e. > smooth terrain, academic test problems), but when you have local ponding > due to real terrain effects, you are hosed. A better solution approach > doesn't help if your equations aren't meant for the problem you are trying > to solve. > > I think 'bending' the physics is the right idea, but I never had much luck > myself with this. > > Nate > > > > > > > On Mon, Apr 6, 2020 at 2:56 PM Zou, Ling via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> What about ?bending? the physics a bit? >> >> >> >> Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), >> if x[i-1]+z[i-1] > x[i]+z[i] + eps >> >> Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), >> if x[i]+z[i] > x[i-1]+z[i-1] + eps >> >> Q_i-1/2 proportional to x[i-1] + z[i-1] - (x[i] + >> z[i]) in between >> >> >> >> in which, eps is a very small positive number. >> >> >> >> -Ling >> >> >> >> >> >> *From: *petsc-users on behalf of >> Alexander B Prescott >> *Date: *Monday, April 6, 2020 at 1:06 PM >> *To: *PETSc >> *Subject: *[petsc-users] Discontinuities in the Jacobian matrix for >> nonlinear problem >> >> >> >> Hello, >> >> >> >> The non-linear boundary-value problem I am applying PETSc to is a >> relatively simple steady-state flow routing algorithm based on the >> continuity equation, such that Div(Q) = 0 everywhere (Q=discharge). I use a >> finite volume approach to calculate flow between nodes, with Q calculated >> as a piecewise smooth function of the local flow depth and the >> water-surface slope. In 1D, the residual is calculated as R(x_i)=Q_i-1/2 - >> Q_i+1/2. >> >> For example, Q_i-1/2 at x[i]: >> >> Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), >> if x[i-1]+z[i-1] > x[i]+z[i] >> >> Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), >> if x[i]+z[i] > x[i-1]+z[i-1] >> >> >> >> Where z[i] is local topography and doesn't change over the iterations, >> and Q_i+1/2 is computed analogously. So the residual derivatives with >> respect to x[i-1], x[i] and x[i+1] are not continuous when the >> water-surface slope = 0. >> >> >> >> Are there intelligent ways to handle this problem? My 1D trial runs >> naively fix any zero-valued water-surface slopes to a small non-zero >> positive value (e.g. 1e-12). Solver convergence has been mixed and highly >> dependent on the initial guess. So far, FAS with QN coarse solver has been >> the most robust. >> >> >> >> Restricting x[i] to be non-negative is a separate issue, to which I have >> applied the SNES_VI solvers. They perform modestly but have been less >> robust. >> >> >> >> Best, >> >> Alexander >> >> >> >> >> >> >> -- >> >> Alexander Prescott >> >> alexprescott at email.arizona.edu >> >> PhD Candidate, The University of Arizona >> >> Department of Geosciences >> >> 1040 E. 
4th Street >> >> Tucson, AZ, 85721 >> > -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lzou at anl.gov Mon Apr 6 21:24:28 2020 From: lzou at anl.gov (Zou, Ling) Date: Tue, 7 Apr 2020 02:24:28 +0000 Subject: [petsc-users] [EXT]Re: Discontinuities in the Jacobian matrix for nonlinear problem In-Reply-To: References: Message-ID: Ok, I do not pretend to be shallow water expert here. I only played with it maybe ten years ago. When you say 5 point stencil in 1d flow, that sounds so much like a finite volume method with TVD reconstruction. I solved many 1d flow equation using PETSc (Euler eqn., single-phase flow eqn., two-phase flow eqn.), had a lot of ?zero-flow? conditions, so far PETSc works pretty well for me. -Ling From: Alexander B Prescott Date: Monday, April 6, 2020 at 8:44 PM To: Nathan Collier Cc: "Zou, Ling" , PETSc Subject: Re: [EXT]Re: [petsc-users] Discontinuities in the Jacobian matrix for nonlinear problem Hi Nathan, Thanks for the thoughtful response. For the 1D toy version I force flow to occur at every node, but my ultimate objective is to apply this to a 2D floodplain in which a node may or may not have flow in the 'true' solution. Based on what you point out, however, it seems that this approach may be doomed to fail. For one thing, using a 5 point stencil, a node with no flow and whose neighbor's have no flow will produce a row in the 'bent physics' Jacobian of all zero entries. Based on my experience so far that will cause PETSc to return errors. Does that sound right to you? I agree that the difficulty encountered with this sort of problem is counterintuitive to the relative simplification made on nature! Best, Alexander On Mon, Apr 6, 2020 at 12:45 PM Nathan Collier > wrote: External Email Alexander, I am not familiar with your specific model, but I do have experience working with the diffusive / kinematic wave approximation to shallow water equations. I always had a lot of trouble near ponding conditions (steady state), which is odd because when nature says do nothing, you don't expect the equations to be hard to solve. Some kind soul pointed out to me that this was because the equations are derived using a power law model relating flow to slope/depth which breaks down when you should have no flow. So as long as you have flow, the solver behaves well and things are fine (i.e. smooth terrain, academic test problems), but when you have local ponding due to real terrain effects, you are hosed. A better solution approach doesn't help if your equations aren't meant for the problem you are trying to solve. I think 'bending' the physics is the right idea, but I never had much luck myself with this. Nate On Mon, Apr 6, 2020 at 2:56 PM Zou, Ling via petsc-users > wrote: What about ?bending? the physics a bit? Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), if x[i-1]+z[i-1] > x[i]+z[i] + eps Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), if x[i]+z[i] > x[i-1]+z[i-1] + eps Q_i-1/2 proportional to x[i-1] + z[i-1] - (x[i] + z[i]) in between in which, eps is a very small positive number. 
-Ling From: petsc-users > on behalf of Alexander B Prescott > Date: Monday, April 6, 2020 at 1:06 PM To: PETSc > Subject: [petsc-users] Discontinuities in the Jacobian matrix for nonlinear problem Hello, The non-linear boundary-value problem I am applying PETSc to is a relatively simple steady-state flow routing algorithm based on the continuity equation, such that Div(Q) = 0 everywhere (Q=discharge). I use a finite volume approach to calculate flow between nodes, with Q calculated as a piecewise smooth function of the local flow depth and the water-surface slope. In 1D, the residual is calculated as R(x_i)=Q_i-1/2 - Q_i+1/2. For example, Q_i-1/2 at x[i]: Q_i-1/2 proportional to sqrt(x[i-1] + z[i-1] - (x[i] + z[i])), if x[i-1]+z[i-1] > x[i]+z[i] Q_i-1/2 proportional to -1.0*sqrt(x[i] + z[i] - (x[i-1] + z[i-1])), if x[i]+z[i] > x[i-1]+z[i-1] Where z[i] is local topography and doesn't change over the iterations, and Q_i+1/2 is computed analogously. So the residual derivatives with respect to x[i-1], x[i] and x[i+1] are not continuous when the water-surface slope = 0. Are there intelligent ways to handle this problem? My 1D trial runs naively fix any zero-valued water-surface slopes to a small non-zero positive value (e.g. 1e-12). Solver convergence has been mixed and highly dependent on the initial guess. So far, FAS with QN coarse solver has been the most robust. Restricting x[i] to be non-negative is a separate issue, to which I have applied the SNES_VI solvers. They perform modestly but have been less robust. Best, Alexander -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -- Alexander Prescott alexprescott at email.arizona.edu PhD Candidate, The University of Arizona Department of Geosciences 1040 E. 4th Street Tucson, AZ, 85721 -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 6 22:14:31 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Apr 2020 22:14:31 -0500 (CDT) Subject: [petsc-users] [WARNING: UNSCANNABLE EXTRACTION FAILED]Re: deactivating x11 in PETSc 3.13.0 with pastix/hwloc In-Reply-To: References: Message-ID: Great! Will have this change in the next petsc update. Satish On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > it worked fine! 
> > the previous output: > > [alfredo.jaramillo at sdumont14 petsc]$ ldd > arch-linux2-c-debug-actual/lib/libhwloc.so > linux-vdso.so.1 => (0x00007ffde6f49000) > libm.so.6 => /usr/lib64/libm.so.6 (0x00007fb8be4ed000) > libXNVCtrl.so.0 => /usr/lib64/libXNVCtrl.so.0 (0x00007fb8be2e8000) > libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007fb8be0d6000) > libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007fb8bdd98000) > libc.so.6 => /usr/lib64/libc.so.6 (0x00007fb8bd9cb000) > /lib64/ld-linux-x86-64.so.2 (0x00007fb8bea4f000) > libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007fb8bd7a3000) > libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007fb8bd59f000) > libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007fb8bd39b000) > > the updated output: > > [alfredo.jaramillo at sdumont14 petsc]$ ldd > arch-linux2-c-debug-previous/lib/libhwloc.so > linux-vdso.so.1 => (0x00007fff135c7000) > libm.so.6 => /usr/lib64/libm.so.6 (0x00007f5ecd5f3000) > libc.so.6 => /usr/lib64/libc.so.6 (0x00007f5ecd226000) > /lib64/ld-linux-x86-64.so.2 (0x00007f5ecdb54000 > > I added the option --download-hwloc-configure-arguments="--disable-opencl > --disable-cuda --disable-nvml --disable-gl" to my installation and pastix > is working fine, > > thank you very much > regards > > On Mon, Apr 6, 2020 at 6:39 PM Satish Balay wrote: > > > I pushed another change to branch balay/fix-hwloc-x-dependency/maint > > > > Can you check again? > > > > BTW: The following options don't make sense when using --with-mpi=0 > > > > CPPFLAGS=-I/scratch/app/openmpi/4.0_gnu/include > > -I/scratch/app/mpc/1.0.3/include -I/scratch/app/isl/0.18/include > > -I/scratch/app/gcc/6.5/include LDFLAGS=-L/scratch/app/openmpi/4.0_gnu/lib > > -L/scratch/app/mpc/1.0.3/lib -L/scratch/app/isl/0.18/lib > > -L/scratch/app/gcc/6.5/lib64 -L/scratch/app/gcc/6.5/lib > > > > [And likely these options are missing appropriate quotes..] > > > > Satish > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > ops... you can find the attachments here > > > > > https://www.dropbox.com/sh/n8scz7wioe01t6p/AAC3P3iU1jLYsW6m3vs6S5DHa?dl=0 > > > > > > I got > > > > > > $ ldd minimal/lib/libhwloc.so > > > linux-vdso.so.1 => (0x00007fffcc3ae000) > > > libm.so.6 => /usr/lib64/libm.so.6 (0x00007ff0b90a2000) > > > libXNVCtrl.so.0 => /usr/lib64/libXNVCtrl.so.0 (0x00007ff0b8e9d000) > > > libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007ff0b8c8b000) > > > libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007ff0b894d000) > > > libc.so.6 => /usr/lib64/libc.so.6 (0x00007ff0b8580000) > > > /lib64/ld-linux-x86-64.so.2 (0x00007ff0b9604000) > > > libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007ff0b8358000) > > > libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007ff0b8154000) > > > libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007ff0b7f50000) > > > > > > On Mon, Apr 6, 2020 at 6:02 PM Satish Balay wrote: > > > > > > > The attachments didn't come through.. > > > > > > > > >>>> > > > > DENIAL OF SERVICE ALERT > > > > > > > > A denial of service protection limit was exceeded. The file has been > > > > removed. > > > > Context: 'hwloc-x11=no.tar.gz\hwloc-x11=no.tar' > > > > Reason: The data size limit was exceeded > > > > Limit: 10 MB > > > > Ticket Number : 0c20-5e8b-97c0-000f > > > > <<<<<< > > > > > > > > Can you retry the test below - i.e './configure --with-mpi=0 > > > > --download-hwloc --with-x=0' - and see if this works? > > > > > > > > What do you get for ldd [as shown below]? 
> > > > > > > > Satish > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > I've tried two installations in that branch, a "full installation" > > with > > > > > every package I need, and a minimal one. I attach the results here, > > these > > > > > libraries are still being linked > > > > > > > > > > On Mon, Apr 6, 2020 at 4:41 PM Satish Balay > > wrote: > > > > > > > > > > > Hm - its working for me on CentOS7 [I see you are using > > RHEL7/CentOS7] > > > > > > > > > > > > > > > > > > Can you try the branch balay/fix-hwloc-x-dependency/maint - and see > > > > what > > > > > > you get? > > > > > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > > > > --download-hwloc > > > > > > --with-x=0 > > > > > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > > > > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > > > > > > > > > > linux-vdso.so.1 => (0x00007ffe88bae000) > > > > > > libm.so.6 => /lib64/libm.so.6 (0x00007ff75c7b9000) > > > > > > libc.so.6 => /lib64/libc.so.6 (0x00007ff75c3eb000) > > > > > > /lib64/ld-linux-x86-64.so.2 (0x00007ff75cd1a000) > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ./configure --with-mpi=0 > > > > --download-hwloc > > > > > > --with-x=1 > > > > > > > > > > > > [balay at petsc-c7 petsc:d409bfc]$ ldd > > > > arch-linux2-c-debug/lib/libhwloc.so > > > > > > > > > > > > > > > > > > linux-vdso.so.1 => (0x00007ffe129cc000) > > > > > > libm.so.6 => /lib64/libm.so.6 (0x00007f89920fa000) > > > > > > libX11.so.6 => /lib64/libX11.so.6 (0x00007f8991dbc000) > > > > > > libc.so.6 => /lib64/libc.so.6 (0x00007f89919ee000) > > > > > > /lib64/ld-linux-x86-64.so.2 (0x00007f899265b000) > > > > > > libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f89917c6000) > > > > > > libdl.so.2 => /lib64/libdl.so.2 (0x00007f89915c2000) > > > > > > libXau.so.6 => /lib64/libXau.so.6 (0x00007f89913be000) > > > > > > [balay at petsc-c7 petsc:d409bfc]$ > > > > > > > > > > > > Satish > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > No, it doesn't work... I also tried --with-x=disabled. I gave a > > look > > > > to > > > > > > > ./configure --help in the hwloc directory and to the configure > > file > > > > > > itself, > > > > > > > but Im not finding the right option. > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 3:53 PM Satish Balay > > > > wrote: > > > > > > > > > > > > > > > Looks like the option is --with-x=no > > > > > > > > > > > > > > > > Can you give this a try? Branch > > balay/fix-hwloc-x-dependency/maint > > > > has > > > > > > > > this update now > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > Hello Satish, Im sorry but I think I tested this workaround > > with > > > > the > > > > > > > > wrong > > > > > > > > > installation, my bad. 
> > > > > > > > > In fact the libXNVC, X11 libraries are still being linked > > even > > > > with > > > > > > the > > > > > > > > > --download-hwloc-configure-arguments=--without-x option > > > > > > > > > > > > > > > > > > Im attaching the config.log file located in > > > > > > > > > $PETSC_DIR/$PETSC_ARCH/externalpackages/hwloc-2.1.0 > > > > > > > > > you can see that the option --without-x is there (line 7) > > but by > > > > > > means of > > > > > > > > > $ ldd foo > > > > > > > > > I see that the links are still there > > > > > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:57 PM Satish Balay < > > balay at mcs.anl.gov> > > > > > > wrote: > > > > > > > > > > > > > > > > > > > Great! > > > > > > > > > > > > > > > > > > > > Wrt pastix dependency on hwloc - > > > > > > > > > > config/BuildSystem/config/packages/PaStiX.py has the > > following > > > > > > comment: > > > > > > > > > > > > > > > > > > > > # PaStiX.py does not absolutely require hwloc, but it > > > > performs > > > > > > > > better > > > > > > > > > > with it and can fail (in ways not easily tested) without it > > > > > > > > > > # > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > > > > > > > > > > # > > > > > > https://solverstack.gitlabpages.inria.fr/pastix/Bindings.html > > > > > > > > > > > > > > > > > > > > I have a fix in branch balay/fix-hwloc-x-dependency/maint > > [that > > > > > > does > > > > > > > > not > > > > > > > > > > need the extra > > --download-hwloc-configure-arguments=--without-x > > > > > > > > option]. > > > > > > > > > > Can you give this a try? > > > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > > > > > hello Satish, > > > > > > > > > > > adding > > > > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > worked perfectly > > > > > > > > > > > > > > > > > > > > > > thank you! > > > > > > > > > > > > > > > > > > > > > > On Mon, Apr 6, 2020 at 2:12 PM Satish Balay < > > > > balay at mcs.anl.gov> > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > you can try: > > > > > > > > > > > > > > > > > > > > > > > > --download-pastix --download-hwloc > > > > > > > > > > > > --download-hwloc-configure-arguments=--without-x > > > > > > > > > > > > > > > > > > > > > > > > We should fix this to automatically use --with-x=0/1 > > > > > > > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > On Mon, 6 Apr 2020, Alfredo Jaramillo wrote: > > > > > > > > > > > > > > > > > > > > > > > > > hello everyone, > > > > > > > > > > > > > > > > > > > > > > > > > > I have a fresh installation of the 3.13.0 version > > with > > > > > > pastix. > > > > > > > > Like > > > > > > > > > > with > > > > > > > > > > > > > previous versions, I'm using the options > > > > > > > > > > > > > > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 > > > > > > > > > > > > > > > > > > > > > > > > > > to disable X11 > > > > > > > > > > > > > > > > > > > > > > > > > > however, when compiling my program foo and doing > > > > > > > > > > > > > > > > > > > > > > > > > > $ ldd foo > > > > > > > > > > > > > > > > > > > > > > > > > > between the linked libraries there appear: > > > > > > > > > > > > > libXNVCtrl.so.0 and libX11.so.6 > > > > > > > > > > > > > > > > > > > > > > > > > > the first one related to NVIDIA. 
I observed that this > > > > does > > > > > > not > > > > > > > > happen > > > > > > > > > > > > when > > > > > > > > > > > > > installing PETSc without hwloc. In this new version, > > > > PETSc > > > > > > > > requires > > > > > > > > > > to > > > > > > > > > > > > > install hwloc when trying to install pastix. In > > previous > > > > > > > > versions of > > > > > > > > > > > > PETSc > > > > > > > > > > > > > (eg 3.11.2) that wasn't necessary. > > > > > > > > > > > > > > > > > > > > > > > > > > I'm working in a cluster where I have no access to > > these > > > > > > > > X11-related > > > > > > > > > > > > > libraries and that's why I need them not be linked. > > Is it > > > > > > there > > > > > > > > some > > > > > > > > > > way > > > > > > > > > > > > to > > > > > > > > > > > > > disable X11 when installing hwloc? maybe enforcing > > some > > > > > > > > configuration > > > > > > > > > > > > > variables when installing it through petsc or > > installing > > > > it > > > > > > > > > > > > independently? > > > > > > > > > > > > > > > > > > > > > > > > > > thanks a lot! > > > > > > > > > > > > > > > > > > > > > > > > > > Below the configuration command of the two > > installations > > > > I've > > > > > > > > tried > > > > > > > > > > with > > > > > > > > > > > > > the 3.13.0 version. > > > > > > > > > > > > > > > > > > > > > > > > > > =================== WITH PASTIX =================== > > > > > > > > > > > > > > > > > > > > > > > > > > ./configure --with-make-np=20 > > > > > > > > > > > > > --with-petsc-arch=x64go-3.13-openmpi-4.0.1-pastix-64 > > > > > > > > > > --with-debugging=0 > > > > > > > > > > > > > --doCleanup=0 \ > > > > > > > > > > > > > --with-mpi=1 \ > > > > > > > > > > > > > --with-valgrind=1 > > > > > > > > > > --with-valgrind-dir=$PATH_TO_VALGRIND/valgrind-3.15.0 \ > > > > > > > > > > > > > --download-scalapack \ > > > > > > > > > > > > > --download-openblas \ > > > > > > > > > > > > > --download-mumps \ > > > > > > > > > > > > > --download-superlu_dist \ > > > > > > > > > > > > > --download-metis \ > > > > > > > > > > > > > --download-parmetis \ > > > > > > > > > > > > > --download-ptscotch \ > > > > > > > > > > > > > --download-hypre \ > > > > > > > > > > > > > > > > > > > > > > > > > > *--download-pastix \--download-hwloc \* > > > > > > > > > > > > > --with-64-bit-indices=1 \ > > > > > > > > > > > > > LDFLAGS=$LDFLAGS CPPFLAGS=$CPPFLAGS \ > > > > > > > > > > > > > --with-cxx-dialect=C++11 \ > > > > > > > > > > > > > --with-x11=0 --with-x=0 --with-windows-graphics=0 \ > > > > > > > > > > > > > COPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > > > > CXXOPTFLAGS="-O3 -march=native -mtune=native" \ > > > > > > > > > > > > > FOPTFLAGS="-O3 -march=native -mtune=native" > > > > > > > > > > > > > > > > > > > > > > > > > > =================== WITHOUT PASTIX > > =================== > > > > > > > > > > > > > > > > > > > > > > > > > > the same as above but the options "--download-pastix > > > > > > > > > > --download-hwloc" > > > > > > > > > > > > > > > > > > > > > > > > > > > > ====================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From mfadams at lbl.gov Tue Apr 7 06:44:52 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Apr 2020 07:44:52 -0400 Subject: [petsc-users] strange TS adaptivity 
behavior In-Reply-To: <874ktw5kx0.fsf@jedbrown.org> References: <87a73o5rbi.fsf@jedbrown.org> <874ktw5kx0.fsf@jedbrown.org> Message-ID: > > > > > If you increase resolution at differentiating a discontinuity, you'd > expect to become more and more confident that the error is large. > I am trying to use smooth functions but my first function might have had a discontinuity. I'm now just using a sin function that is perfectly smooth. In looking at this some more, the dynamics/stiffness of the system, when these (cold) sources are added, is just stiff and this behavior seems reasonable. I've found that with a dt-min of 0.1 and ts_tol 1.e-1 it works OK and the results are the same in the eyeball norm as dt-min 1e-3 and ts_tol 1.e-3. So I'm gonna call that converged. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Tue Apr 7 23:46:29 2020 From: danyang.su at gmail.com (Danyang Su) Date: Tue, 07 Apr 2020 21:46:29 -0700 Subject: [petsc-users] DMPlex partition problem Message-ID: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. Similar problem are observed for larger dataset with more layers. For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457827 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542677 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 349308 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: basin2layer.exo Type: application/octet-stream Size: 51612 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex1f90.F90 Type: application/octet-stream Size: 4441 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: makefile Type: application/octet-stream Size: 365 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: basin2layer.vtk Type: application/octet-stream Size: 55566 bytes Desc: not available URL: From karabelaselias at gmail.com Wed Apr 8 02:16:40 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Wed, 8 Apr 2020 09:16:40 +0200 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <87a7472kcb.fsf@jedbrown.org> References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> <87a7472kcb.fsf@jedbrown.org> Message-ID: <2171016d-4c41-840d-7d60-14d1a5c2bd1e@gmail.com> Dear Jed, I'm done implementing my FCT-Solver and it works fairly well on a small amount of MPI-Procs. Additionally to your little snippet I have used a VecScatterToAll. Reason is that the flux correction f_i takes the form Sum_{j} alpha_ij r_ij where r_ij is defined on the sparsity pattern of my FEM Matrix and alpha_ij is based on two vectors Rp and Rm. So basically I need off-process values of these vectors to construct alpha which I made with a VecScatterToAll. However I guess this will slow down my overall program quite significant. Any Ideas? Best regards Elias On 23/03/2020 15:53, Jed Brown wrote: > Thanks; please don't drop the list. > > I'd be curious whether this operation is common enough that we should > add it to PETSc. My hesitance has been that people may want many > different variants when working with systems of equations, for example. > > Elias Karabelas writes: > >> Dear Jed, >> >> Yes the Matrix A comes from assembling a FEM-convection-diffusion >> operator over a tetrahedral mesh. So my matrix graph should be >> symmetric. Thanks for the snippet >> >> On 23/03/2020 15:42, Jed Brown wrote: >>> Elias Karabelas writes: >>> >>>> Dear Users, >>>> >>>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>>> To this end I have amongst other things create a Matrix whose entries >>>> are given by >>>> >>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>> >>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>> >>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>>> >>>> I was wondering how to do this. My first search brought me to >>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>> >>>> >>>> but this just goes over the rows of one matrix to set new values and now >>>> I would need to run over the rows and columns of the matrix. My Idea was >>>> to just create a transpose of A and do the same but then the row-layout >>>> will be different and I can't use the same for loop for A and AT and >>>> thus also won't be able to calculate the max's above. >>> Does your matrix have symmetric nonzero structure? (It's typical for >>> finite element methods.) 
>>> >>> If so, all the indices will match up so I think you can do something like: >>> >>> for (row=rowstart; row>> PetscScalar Lvals[MAX_LEN]; >>> PetscInt diag; >>> MatGetRow(A, row, &ncols, &cols, &vals); >>> MatGetRow(At, row, &ncolst, &colst, &valst); >>> assert(ncols == ncolst); // symmetric structure >>> PetscScalar sum = 0; >>> for (c=0; c>> assert(cols[c] == colst[c]); // symmetric structure >>> if (cols[c] == row) diag = c; >>> else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); >>> } >>> Lvals[diag] = sum; >>> MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); >>> MatRestoreRow(A, row, &ncols, &cols, &vals); >>> MatRestoreRow(At, row, &ncolst, &colst, &valst); >>> } From knepley at gmail.com Wed Apr 8 06:25:28 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2020 07:25:28 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> Message-ID: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > Dear All, > > > > Hope you are safe and healthy. > > > > I have a question regarding pretty different partition results of prism > mesh. The partition in PETSc generates much more ghost nodes/cells than the > partition in Gmsh, even though both use metis as partitioner. Attached > please find the prism mesh in both vtk and exo format, the test code > modified based on ex1f90 example. Similar problem are observed for larger > dataset with more layers. > I will figure this out by next week. Thanks, Matt > For example, in Gmsh, I get partition results using two processors and > four processors as shown below, which are pretty reasonable. > > > > > > However, in PETSc, the partition looks a bit weird. Looks like it takes > layer partition first and then inside layer. If the number of nodes per > layer is very large, this kind of partitioning results into much more ghost > nodes/cells. > > > > Anybody know how to improve the partitioning in PETSc? I have tried > parmetis and chaco. There is no big difference between them. > > > > > > > > Thanks, > > > > Danyang > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457827 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542677 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 349308 bytes Desc: not available URL: From knepley at gmail.com Wed Apr 8 08:45:41 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2020 09:45:41 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> Message-ID: On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: > On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > >> Dear All, >> >> >> >> Hope you are safe and healthy. >> >> >> >> I have a question regarding pretty different partition results of prism >> mesh. 
The partition in PETSc generates much more ghost nodes/cells than the >> partition in Gmsh, even though both use metis as partitioner. Attached >> please find the prism mesh in both vtk and exo format, the test code >> modified based on ex1f90 example. Similar problem are observed for larger >> dataset with more layers. >> > > I will figure this out by next week. > I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Thanks, Matt Thanks, > > Matt > > >> For example, in Gmsh, I get partition results using two processors and >> four processors as shown below, which are pretty reasonable. >> >> >> >> >> >> However, in PETSc, the partition looks a bit weird. Looks like it takes >> layer partition first and then inside layer. If the number of nodes per >> layer is very large, this kind of partitioning results into much more ghost >> nodes/cells. >> >> >> >> Anybody know how to improve the partitioning in PETSc? I have tried >> parmetis and chaco. There is no big difference between them. >> >> >> >> >> >> >> >> Thanks, >> >> >> >> Danyang >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457827 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542677 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 349308 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Danyang.png Type: image/png Size: 82287 bytes Desc: not available URL: From danyang.su at gmail.com Wed Apr 8 11:13:16 2020 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 08 Apr 2020 09:13:16 -0700 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> Message-ID: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 6:45 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. 
Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. Similar problem are observed for larger dataset with more layers. I will figure this out by next week. I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Hi Matt, Thanks for your quick response. If I use your script, the partition looks good, as shown in the attached figure. I am working on PETSc 3.13.0 release version on Mac OS. Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? Thanks, Matt Thanks, Matt For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457828 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542678 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 349309 bytes Desc: not available URL: From knepley at gmail.com Wed Apr 8 11:19:44 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2020 12:19:44 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> Message-ID: On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 6:45 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: > > On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > > Dear All, > > > > Hope you are safe and healthy. > > > > I have a question regarding pretty different partition results of prism > mesh. The partition in PETSc generates much more ghost nodes/cells than the > partition in Gmsh, even though both use metis as partitioner. Attached > please find the prism mesh in both vtk and exo format, the test code > modified based on ex1f90 example. Similar problem are observed for larger > dataset with more layers. 
> > > > I will figure this out by next week. > > > > I have run your mesh and do not get those weird partitions. I am running > in master. What are you using? Also, here is an easy way > > to do this using a PETSc test: > > > > cd $PETSC_DIR > > make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" > EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view > hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 > > ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 > > > > and then load mesh.xmf into Paraview. Here is what I see (attached). Is it > possible for you to try the master branch? > > > > Hi Matt, > > > > Thanks for your quick response. If I use your script, the partition looks > good, as shown in the attached figure. I am working on PETSc 3.13.0 release > version on Mac OS. > > > > Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? > > It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c I looked at your code and cannot see any difference. Also, no changes are in master that are not in 3.13. This is very strange. I guess we will have to go one step at a time between the example and your code. Thanks, Matt > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > For example, in Gmsh, I get partition results using two processors and > four processors as shown below, which are pretty reasonable. > > > > > > However, in PETSc, the partition looks a bit weird. Looks like it takes > layer partition first and then inside layer. If the number of nodes per > layer is very large, this kind of partitioning results into much more ghost > nodes/cells. > > > > Anybody know how to improve the partitioning in PETSc? I have tried > parmetis and chaco. There is no big difference between them. > > > > > > > > Thanks, > > > > Danyang > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457828 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542678 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png Type: image/png Size: 349309 bytes Desc: not available URL: From jed at jedbrown.org Wed Apr 8 11:26:13 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 08 Apr 2020 10:26:13 -0600 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <2171016d-4c41-840d-7d60-14d1a5c2bd1e@gmail.com> References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> <87a7472kcb.fsf@jedbrown.org> <2171016d-4c41-840d-7d60-14d1a5c2bd1e@gmail.com> Message-ID: <87v9mayme2.fsf@jedbrown.org> Elias Karabelas writes: > Dear Jed, > > I'm done implementing my FCT-Solver and it works fairly well on a small > amount of MPI-Procs. Additionally to your little snippet I have used a > VecScatterToAll. Can you share the code including that VecScatterToAll? There more local ways to get that information. > Reason is that the flux correction f_i takes the form Sum_{j} alpha_ij > r_ij where r_ij is defined on the sparsity pattern of my FEM Matrix and > alpha_ij is based on two vectors Rp and Rm. So basically I need > off-process values of these vectors to construct alpha which I made with > a VecScatterToAll. However I guess this will slow down my overall > program quite significant. Any Ideas? > > Best regards > > Elias > > On 23/03/2020 15:53, Jed Brown wrote: >> Thanks; please don't drop the list. >> >> I'd be curious whether this operation is common enough that we should >> add it to PETSc. My hesitance has been that people may want many >> different variants when working with systems of equations, for example. >> >> Elias Karabelas writes: >> >>> Dear Jed, >>> >>> Yes the Matrix A comes from assembling a FEM-convection-diffusion >>> operator over a tetrahedral mesh. So my matrix graph should be >>> symmetric. Thanks for the snippet >>> >>> On 23/03/2020 15:42, Jed Brown wrote: >>>> Elias Karabelas writes: >>>> >>>>> Dear Users, >>>>> >>>>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>>>> To this end I have amongst other things create a Matrix whose entries >>>>> are given by >>>>> >>>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>>> >>>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>>> >>>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>>>> >>>>> I was wondering how to do this. My first search brought me to >>>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>>> >>>>> >>>>> but this just goes over the rows of one matrix to set new values and now >>>>> I would need to run over the rows and columns of the matrix. My Idea was >>>>> to just create a transpose of A and do the same but then the row-layout >>>>> will be different and I can't use the same for loop for A and AT and >>>>> thus also won't be able to calculate the max's above. >>>> Does your matrix have symmetric nonzero structure? (It's typical for >>>> finite element methods.) 
>>>> >>>> If so, all the indices will match up so I think you can do something like: >>>> >>>> for (row=rowstart; row>>> PetscScalar Lvals[MAX_LEN]; >>>> PetscInt diag; >>>> MatGetRow(A, row, &ncols, &cols, &vals); >>>> MatGetRow(At, row, &ncolst, &colst, &valst); >>>> assert(ncols == ncolst); // symmetric structure >>>> PetscScalar sum = 0; >>>> for (c=0; c>>> assert(cols[c] == colst[c]); // symmetric structure >>>> if (cols[c] == row) diag = c; >>>> else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); >>>> } >>>> Lvals[diag] = sum; >>>> MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); >>>> MatRestoreRow(A, row, &ncols, &cols, &vals); >>>> MatRestoreRow(At, row, &ncolst, &colst, &valst); >>>> } From danyang.su at gmail.com Wed Apr 8 11:37:25 2020 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 08 Apr 2020 09:37:25 -0700 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> Message-ID: <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> From: Matthew Knepley Date: Wednesday, April 8, 2020 at 9:20 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 6:45 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. Similar problem are observed for larger dataset with more layers. I will figure this out by next week. I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Hi Matt, Thanks for your quick response. If I use your script, the partition looks good, as shown in the attached figure. I am working on PETSc 3.13.0 release version on Mac OS. Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c I looked at your code and cannot see any difference. Also, no changes are in master that are not in 3.13. This is very strange. I guess we will have to go one step at a time between the example and your code. I will add mesh output to the ex1f90 example and check if the cell/vertex rank is exactly the same. I wrote the mesh output myself based on the partition but there should be no problem in that part. The number of ghost nodes and cells is pretty easy to check. Not sure if there is any difference between the C code and Fortran code that causes the problem. Anyway, I will keep you updated. Thanks, Matt Thanks, Matt Thanks, Matt For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. 
However, in PETSc, the partition looks a bit weird. Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457829 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542679 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 349310 bytes Desc: not available URL: From junchao.zhang at gmail.com Wed Apr 8 12:02:46 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 8 Apr 2020 12:02:46 -0500 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <2171016d-4c41-840d-7d60-14d1a5c2bd1e@gmail.com> References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> <87a7472kcb.fsf@jedbrown.org> <2171016d-4c41-840d-7d60-14d1a5c2bd1e@gmail.com> Message-ID: Hi, Elias, VecScatterToAll is implemented with MPI_Allgatherv. If not large scale, I guess it won't be a problem. 
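(For readers following the thread, the gather-everything pattern being discussed looks roughly like the sketch below. This is an illustration only; the names u, useq, and toall are placeholders and are not taken from Elias's code.)

  Vec               useq;   /* sequential copy of u, one full copy per rank */
  VecScatter        toall;
  const PetscScalar *ua;

  ierr = VecScatterCreateToAll(u, &toall, &useq);CHKERRQ(ierr);
  ierr = VecScatterBegin(toall, u, useq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(toall, u, useq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGetArrayRead(useq, &ua);CHKERRQ(ierr);
  /* ua[] can now be indexed by global index on every rank */
  ierr = VecRestoreArrayRead(useq, &ua);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&toall);CHKERRQ(ierr);
  ierr = VecDestroy(&useq);CHKERRQ(ierr);

Every rank ends up holding a full copy of the vector, which is why this becomes a scalability concern as the number of processes grows.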
I assume you want to assemble a MATMPIAIJ D with D_ij = L_ij * (u[i] - u[j]) Since D has the same sparsity pattern as L, we may have (not tested), Mat A,B; const PetscInt *garray,*cols; VecScatter vscat; PetscInt i,j,m,n,ncols; Vec ur; PetscScalar *ulocal,*uremote,val,*vals; IS from,to; ierr = MatMPIAIJGetSeqAIJ(L,&A,&B,&garray);CHKERRQ(ierr); ierr = MatGetSize(B,&NULL,&n);CHKERRQ(ierr); /* garray[]'s length = n */ ierr = VecCreateSeq(PETSC_COMM_SELF,n,&ur);CHKERRQ(ierr); /* ur stores needed off-proc entries of u */ ierr = ISCreateStride(PETSC_COMM_SELF,n,0,1,&to); ierr = ISCreateGeneral(PETSC_COMM_SELF,n,garray,PETSC_COPY_VALUES,&from);CHKERRQ(ierr); ierr = VecScatterCreate(u,from,ur,to,&vscat);CHKERRQ(ierr); /* vscat is D's Mvctx, which however is not exposed to users */ ierr = VecScatterBegin(vscat,u,ur,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); ierr = VecScatterEnd(vscat,u,ur,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); ierr = VecGetArrayRead(u,&ulocal);CHKERRQ(ierr); ierr = VecGetArrayRead(ur,&uremote);CHKERRQ(ierr); ierr = MatGetOwnershipRange(D,&rstart,NULL);CHKERRQ(ierr); ierr = MatGetOwnershipRangeColumn(D,&cstart,NULL); ierr = MatDuplicate(L,&D,MAT_DO_NOT_COPY_VALUES);CHKERRQ(ierr); ierr = MatSetOption(D,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); ierr = MatAssemblyBegin(D,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); ierr = MatGetSize(A,&m,NULL);CHKERRQ(ierr); for (i=0; i wrote: > Dear Jed, > > I'm done implementing my FCT-Solver and it works fairly well on a small > amount of MPI-Procs. Additionally to your little snippet I have used a > VecScatterToAll. > > Reason is that the flux correction f_i takes the form Sum_{j} alpha_ij > r_ij where r_ij is defined on the sparsity pattern of my FEM Matrix and > alpha_ij is based on two vectors Rp and Rm. So basically I need > off-process values of these vectors to construct alpha which I made with > a VecScatterToAll. However I guess this will slow down my overall > program quite significant. Any Ideas? > > Best regards > > Elias > > On 23/03/2020 15:53, Jed Brown wrote: > > Thanks; please don't drop the list. > > > > I'd be curious whether this operation is common enough that we should > > add it to PETSc. My hesitance has been that people may want many > > different variants when working with systems of equations, for example. > > > > Elias Karabelas writes: > > > >> Dear Jed, > >> > >> Yes the Matrix A comes from assembling a FEM-convection-diffusion > >> operator over a tetrahedral mesh. So my matrix graph should be > >> symmetric. Thanks for the snippet > >> > >> On 23/03/2020 15:42, Jed Brown wrote: > >>> Elias Karabelas writes: > >>> > >>>> Dear Users, > >>>> > >>>> I want to implement a FCT (flux corrected transport) scheme with > PETSc. > >>>> To this end I have amongst other things create a Matrix whose entries > >>>> are given by > >>>> > >>>> L_ij = -max(0, A_ij, A_ji) for i neq j > >>>> > >>>> L_ii = Sum_{j=0,..n, j neq i} L_ij > >>>> > >>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. > >>>> > >>>> I was wondering how to do this. My first search brought me to > >>>> > https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html > >>>> > >>>> > >>>> but this just goes over the rows of one matrix to set new values and > now > >>>> I would need to run over the rows and columns of the matrix. 
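(Junchao's sketch above is cut off by the archive in the middle of the assembly loop. What follows is a hedged reconstruction of how that loop might continue, reusing the variables from his setup (A, B, garray, ur, ulocal, uremote, rstart, cstart, D, m) and assuming D_ij = L_ij * (u[i] - u[j]) on the nonzero pattern of L. It is an illustration, not his original code.)

  for (i=0; i<m; i++) {
    PetscInt          grow = rstart + i;   /* global row index */
    PetscInt          nc, c, *gcols;
    const PetscInt    *rcols;
    const PetscScalar *rvals;
    PetscScalar       *dvals;

    /* Diagonal block A: columns are local to this rank, so u comes from ulocal */
    ierr = MatGetRow(A,i,&nc,&rcols,&rvals);CHKERRQ(ierr);
    ierr = PetscMalloc2(nc,&gcols,nc,&dvals);CHKERRQ(ierr);
    for (c=0; c<nc; c++) {
      gcols[c] = cstart + rcols[c];
      dvals[c] = rvals[c]*(ulocal[i] - ulocal[rcols[c]]);
    }
    ierr = MatSetValues(D,1,&grow,nc,gcols,dvals,INSERT_VALUES);CHKERRQ(ierr);
    ierr = PetscFree2(gcols,dvals);CHKERRQ(ierr);
    ierr = MatRestoreRow(A,i,&nc,&rcols,&rvals);CHKERRQ(ierr);

    /* Off-diagonal block B: garray maps its local columns to global columns,
       and the matching u entries were scattered into ur (uremote) above */
    ierr = MatGetRow(B,i,&nc,&rcols,&rvals);CHKERRQ(ierr);
    ierr = PetscMalloc2(nc,&gcols,nc,&dvals);CHKERRQ(ierr);
    for (c=0; c<nc; c++) {
      gcols[c] = garray[rcols[c]];
      dvals[c] = rvals[c]*(ulocal[i] - uremote[rcols[c]]);
    }
    ierr = MatSetValues(D,1,&grow,nc,gcols,dvals,INSERT_VALUES);CHKERRQ(ierr);
    ierr = PetscFree2(gcols,dvals);CHKERRQ(ierr);
    ierr = MatRestoreRow(B,i,&nc,&rcols,&rvals);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(D,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(D,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  /* ...followed by the usual VecRestoreArrayRead, VecScatterDestroy, and ISDestroy cleanup */

The point of this approach, compared with VecScatterCreateToAll, is that only the u entries matching the off-diagonal columns of L (those listed in garray) are communicated, which is the more local communication pattern Jed alludes to.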
My Idea > was > >>>> to just create a transpose of A and do the same but then the > row-layout > >>>> will be different and I can't use the same for loop for A and AT and > >>>> thus also won't be able to calculate the max's above. > >>> Does your matrix have symmetric nonzero structure? (It's typical for > >>> finite element methods.) > >>> > >>> If so, all the indices will match up so I think you can do something > like: > >>> > >>> for (row=rowstart; row >>> PetscScalar Lvals[MAX_LEN]; > >>> PetscInt diag; > >>> MatGetRow(A, row, &ncols, &cols, &vals); > >>> MatGetRow(At, row, &ncolst, &colst, &valst); > >>> assert(ncols == ncolst); // symmetric structure > >>> PetscScalar sum = 0; > >>> for (c=0; c >>> assert(cols[c] == colst[c]); // symmetric structure > >>> if (cols[c] == row) diag = c; > >>> else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); > >>> } > >>> Lvals[diag] = sum; > >>> MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); > >>> MatRestoreRow(A, row, &ncols, &cols, &vals); > >>> MatRestoreRow(At, row, &ncolst, &colst, &valst); > >>> } > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karabelaselias at gmail.com Wed Apr 8 12:06:04 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Wed, 8 Apr 2020 19:06:04 +0200 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: <87v9mayme2.fsf@jedbrown.org> References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> <87a7472kcb.fsf@jedbrown.org> <2171016d-4c41-840d-7d60-14d1a5c2bd1e@gmail.com> <87v9mayme2.fsf@jedbrown.org> Message-ID: Happy to, but this is far from a production code speaking of commenting and stuff :D ############################################################################## void CalculateFluxCorrection(ConvectionDiffusionOperator * convOp) { ? //convOp->R? SparseMatrix ? //convOp->MC consistent Mass Matrix ? //convOp->L? Fluxcorrected stiffnessmatrix ? //convOp->D? Helper matrix ? //convOp->ML lumped Mass matrix (Vec) ? //Zero Matrix R ? MatZeroEntries(&convOp->R); ? Vec Rpos, Rneg; ? VecDuplicate(convOp->utilde, &Rpos); ? VecDuplicate(convOp->utilde, &Rneg); ? // for later usage we need the maxima and minima of utilde ? double utildemax = 0.; ? double utildemin = 0.; ? VecMax(convOp->utilde, nullptr, &utildemax); ? VecMin(convOp->utilde, nullptr, &utildemin); ? //Create Scatter to all and globalize the vectors ? Vec _vseq, _useq; ? VecScatter toall; ? VecScatterCreateToAll(convOp->utilde,&toall,&_useq); ? VecDuplicate(_useq, &_vseq); ? VecScatterBegin(toall, convOp->utilde, _useq, INSERT_VALUES, SCATTER_FORWARD); ? VecScatterEnd(toall, convOp->utilde, _useq, INSERT_VALUES, SCATTER_FORWARD); ? VecScatterBegin(toall, convOp->v, _vseq, INSERT_VALUES, SCATTER_FORWARD); ? VecScatterEnd(toall, convOp->v, _vseq, INSERT_VALUES, SCATTER_FORWARD); ? //Values of utilde and v globalized on each rank ? const double *_useq_ptr, *_vseq_ptr; ? VecGetArrayRead(_vseq, &_vseq_ptr); ? VecGetArrayRead(_useq, &_useq_ptr); ? double *lmassloc; ? VecGetArrayRead(convOp->ML, &lmassloc); ? //Need write access to local part of Rpos and Rneg since I update the values ? double *rposloc; ? VecGetArray(Rpos, &rposloc); ? double *rnegloc; ? VecGetArray(Rneg, &rnegloc); ? //Stuff needed for assembling convOp->R ? int rstart, rend; ? int ncolsM, ncolsD, ncolsR; ? const int *colsM, *colsD, *colsR; ? const double *valsM, *valsD, *valsR; ? MatGetOwnershipRange(convOp->R,&rstart,&rend); ? const size_t sz = rend - rstart; ? 
  std::vector<double> Lvals;
  //R is defined as R_ij = dt * convOp->MC[i,j] * (v[i] - v[j]) - convOp->D[i,j] * (utilde[i] - utilde[j])
  for(size_t i=0; i < sz; i++)
  {
    int row = rstart + i;
    int diag = 0;
    //Fill i-th row of convOp->R
    MatGetRow(convOp->MC, row, &ncolsM, &colsM, &valsM);
    MatGetRow(convOp->D, row, &ncolsD, &colsD, &valsD);
    assert(ncolsM == ncolsD); //Assert symmetric structure should be given for FEM matrices
    Lvals.assign(ncolsM, 0.0);
    double Pipos = 0.;
    double Pineg = 0.;
    for(int c=0; c < ncolsM; c++)
    {
      assert(colsM[c] == colsD[c]);
      if(colsM[c] == row)
        diag = c;
      else
      {
        Lvals[c] = convOp->dt * (valsM[c] * (_vseq_ptr[row] - _vseq_ptr[colsM[c]]) - valsD[c] * (_useq_ptr[row] - _useq_ptr[colsM[c]]));
        Pipos += std::max(0.0, Lvals[c]);
        Pineg += std::min(0.0, Lvals[c]);
      }
    }
    Lvals[diag] = 0.0;
    MatSetValues(convOp->R, 1, &row, ncolsM, colsM, Lvals.data(), ADD_VALUES);
    //Qip[i] := max(0, max_{j=0,...,N-1, j neq i}(utilde[j] - utilde[i]))
    //Qim[i] := min(0, min_{j=0,...,N-1, j neq i}(utilde[j] - utilde[i]))
    const double Qip = std::max(0.0, utildemax - _useq_ptr[row]);
    const double Qim = std::min(0.0, utildemin - _useq_ptr[row]);
    //calc R+ and R-
    rposloc[i] = Pipos != 0 ? std::min(1.0, lmassloc[i] * Qip / Pipos) : 0.0;
    rnegloc[i] = Pineg != 0 ? std::min(1.0, lmassloc[i] * Qim / Pineg) : 0.0;
    MatRestoreRow(convOp->MC.m, row, &ncolsM, &colsM, &valsM);
    MatRestoreRow(convOp->D.m, row, &ncolsD, &colsD, &valsD);
  }
  MatAssemblyBegin(convOp->R.m,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(convOp->R.m,MAT_FINAL_ASSEMBLY);
  VecRestoreArray(Rpos, &rposloc);
  VecRestoreArray(Rneg, &rnegloc);
  VecRestoreArrayRead(convOp->ML, &lmassloc);
  VecRestoreArrayRead(_vseq, &_vseq_ptr);
  VecRestoreArrayRead(_useq, &_useq_ptr);
  // Done assembling R
  // Calculate flux correction defined via fstar[i] := Sum_j alpha[i,j] * convOp->R[i,j]
  // with alpha[i,j] = R[i,j] > 0 ? min(R+[i], R-[j]) : min(R-[i], R+[j])
  Vec _rpseq, _rmseq;
  VecDuplicate(_useq, &_rpseq);
  VecDuplicate(_useq, &_rmseq);
  VecDestroy(&_useq);
  VecDestroy(&_vseq);
  VecScatterBegin(toall, Rpos, _rpseq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(toall, Rpos, _rpseq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterBegin(toall, Rneg, _rmseq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(toall, Rneg, _rmseq, INSERT_VALUES, SCATTER_FORWARD);
  const double *_rpseq_ptr, *_rmseq_ptr;
  VecGetArrayRead(_rpseq, &_rpseq_ptr);
  VecGetArrayRead(_rmseq, &_rmseq_ptr);
  double *fluxcorr;
  VecGetArray(convOp->fstar, &fluxcorr);
  double aij = 0.;
  //Calculate the flux correction
  for(size_t i=0; i < sz; i++)
  {
    int row = rstart + i;
    MatGetRow(convOp->R.m, row, &ncolsR, &colsR, &valsR);
    for(int c=0; c < ncolsR; c++)
    {
      if(colsR[c] != row)
      {
        aij = valsR[c] > 0 ? std::min(_rpseq_ptr[row], _rmseq_ptr[colsR[c]]) : std::min(_rmseq_ptr[row], _rpseq_ptr[colsR[c]]);
        fluxcorr[i] += aij * valsR[c];
      }
    }
    MatRestoreRow(convOp->R.m, row, &ncolsR, &colsR, &valsR);
  }
  VecRestoreArray(convOp->fstar, &fluxcorr);
  VecRestoreArrayRead(_rpseq, &_rpseq_ptr);
  VecRestoreArrayRead(_rmseq, &_rmseq_ptr);
  VecDestroy(&_rpseq);
  VecDestroy(&_rmseq);
  //Cleanup
  VecDestroy(&Rpos);
  VecDestroy(&Rneg);
VecScatterDestroy(&toall); } On 08/04/2020 18:26, Jed Brown wrote: > Elias Karabelas writes: > >> Dear Jed, >> >> I'm done implementing my FCT-Solver and it works fairly well on a small >> amount of MPI-Procs. Additionally to your little snippet I have used a >> VecScatterToAll. > Can you share the code including that VecScatterToAll? > > There more local ways to get that information. > >> Reason is that the flux correction f_i takes the form Sum_{j} alpha_ij >> r_ij where r_ij is defined on the sparsity pattern of my FEM Matrix and >> alpha_ij is based on two vectors Rp and Rm. So basically I need >> off-process values of these vectors to construct alpha which I made with >> a VecScatterToAll. However I guess this will slow down my overall >> program quite significant. Any Ideas? >> >> Best regards >> >> Elias >> >> On 23/03/2020 15:53, Jed Brown wrote: >>> Thanks; please don't drop the list. >>> >>> I'd be curious whether this operation is common enough that we should >>> add it to PETSc. My hesitance has been that people may want many >>> different variants when working with systems of equations, for example. >>> >>> Elias Karabelas writes: >>> >>>> Dear Jed, >>>> >>>> Yes the Matrix A comes from assembling a FEM-convection-diffusion >>>> operator over a tetrahedral mesh. So my matrix graph should be >>>> symmetric. Thanks for the snippet >>>> >>>> On 23/03/2020 15:42, Jed Brown wrote: >>>>> Elias Karabelas writes: >>>>> >>>>>> Dear Users, >>>>>> >>>>>> I want to implement a FCT (flux corrected transport) scheme with PETSc. >>>>>> To this end I have amongst other things create a Matrix whose entries >>>>>> are given by >>>>>> >>>>>> L_ij = -max(0, A_ij, A_ji) for i neq j >>>>>> >>>>>> L_ii = Sum_{j=0,..n, j neq i} L_ij >>>>>> >>>>>> where Mat A is an (non-symmetric) Input Matrix created beforehand. >>>>>> >>>>>> I was wondering how to do this. My first search brought me to >>>>>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html >>>>>> >>>>>> >>>>>> but this just goes over the rows of one matrix to set new values and now >>>>>> I would need to run over the rows and columns of the matrix. My Idea was >>>>>> to just create a transpose of A and do the same but then the row-layout >>>>>> will be different and I can't use the same for loop for A and AT and >>>>>> thus also won't be able to calculate the max's above. >>>>> Does your matrix have symmetric nonzero structure? (It's typical for >>>>> finite element methods.) 
>>>>> >>>>> If so, all the indices will match up so I think you can do something like: >>>>> >>>>> for (row=rowstart; row>>>> PetscScalar Lvals[MAX_LEN]; >>>>> PetscInt diag; >>>>> MatGetRow(A, row, &ncols, &cols, &vals); >>>>> MatGetRow(At, row, &ncolst, &colst, &valst); >>>>> assert(ncols == ncolst); // symmetric structure >>>>> PetscScalar sum = 0; >>>>> for (c=0; c>>>> assert(cols[c] == colst[c]); // symmetric structure >>>>> if (cols[c] == row) diag = c; >>>>> else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); >>>>> } >>>>> Lvals[diag] = sum; >>>>> MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); >>>>> MatRestoreRow(A, row, &ncols, &cols, &vals); >>>>> MatRestoreRow(At, row, &ncolst, &colst, &valst); >>>>> } From danyang.su at gmail.com Wed Apr 8 14:22:30 2020 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 08 Apr 2020 12:22:30 -0700 Subject: [petsc-users] DMPlex partition problem In-Reply-To: <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> Message-ID: Hi Matt, Here is something pretty interesting. I modified ex1.c file with output of number of nodes and cells (as shown below) . ?And I also changed the stencil size to 1. ??? /* get coordinates and section */ ??? ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); ??? ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); ??? ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); ??? ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); ? ????num_nodes = iend-istart; ??? num_cells = istart; ??? ????/* Output rank and processor information */ printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, num_nodes, num_cells);? If I compile the code using ?make ex1? and then run the test using ?mpiexec -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the modified ex1f90 code I sent. ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 2, num_nodes 699, num_cess 824 rank 0: of nprcs: 2, num_nodes 699, num_cess 824 ? tests mpiexec -n 4 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 4, num_nodes 432, num_cess 486 rank 0: of nprcs: 4, num_nodes 405, num_cess 448 rank 2: of nprcs: 4, num_nodes 411, num_cess 464 rank 3: of nprcs: 4, num_nodes 420, num_cess 466 However, if I compile and run the code using the script you shared, I get reasonable results. ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 #????? > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 #????? > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 #????? > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 #????? > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 #????? > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 #????? > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 Is there some difference in compiling or runtime options that cause the difference? Would you please check if you can reproduce the same problem using the modified ex1.c? 
Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 9:37 AM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem From: Matthew Knepley Date: Wednesday, April 8, 2020 at 9:20 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 6:45 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. Similar problem are observed for larger dataset with more layers. I will figure this out by next week. I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Hi Matt, Thanks for your quick response. If I use your script, the partition looks good, as shown in the attached figure. I am working on PETSc 3.13.0 release version on Mac OS. Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c I looked at your code and cannot see any difference. Also, no changes are in master that are not in 3.13. This is very strange. I guess we will have to go one step at a time between the example and your code. I will add mesh output to the ex1f90 example and check if the cell/vertex rank is exactly the same. I wrote the mesh output myself based on the partition but there should be no problem in that part. The number of ghost nodes and cells is pretty easy to check. Not sure if there is any difference between the C code and Fortran code that causes the problem. Anyway, I will keep you updated. Thanks, Matt Thanks, Matt Thanks, Matt For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457830 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542680 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 349311 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex1.c Type: application/octet-stream Size: 55298 bytes Desc: not available URL: From knepley at gmail.com Wed Apr 8 14:50:31 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2020 15:50:31 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> Message-ID: On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: > Hi Matt, > > > > Here is something pretty interesting. I modified ex1.c file with output of > number of nodes and cells (as shown below) . And I also changed the > stencil size to 1. > > > > /* get coordinates and section */ > > ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); > > ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); > > ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); > > ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); > > > > num_nodes = iend-istart; > > num_cells = istart; > > > > /* Output rank and processor information */ > > printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, > num_nodes, num_cells); > > > > > > If I compile the code using ?make ex1? and then run the test using ?mpiexec > -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the > modified ex1f90 code I sent. > > *?* *tests* mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 2, num_nodes 699, num_cess 824 > > rank 0: of nprcs: 2, num_nodes 699, num_cess 824 > Ah, I was not looking closely. You are asking for a cell overlap of 1 in the partition. That is why these numbers sum to more than the total in the mesh. Do you want a cell overlap of 1? Thanks, Matt > *?* *tests* mpiexec -n 4 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 4, num_nodes 432, num_cess 486 > > rank 0: of nprcs: 4, num_nodes 405, num_cess 448 > > rank 2: of nprcs: 4, num_nodes 411, num_cess 464 > > rank 3: of nprcs: 4, num_nodes 420, num_cess 466 > > > > However, if I compile and run the code using the script you shared, I get > reasonable results. 
> > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 > > # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 > > > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 > > # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 > > # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 > > # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 > > # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 > > > > Is there some difference in compiling or runtime options that cause the > difference? Would you please check if you can reproduce the same problem > using the modified ex1.c? > > > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 9:37 AM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 9:20 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 6:45 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: > > On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > > Dear All, > > > > Hope you are safe and healthy. > > > > I have a question regarding pretty different partition results of prism > mesh. The partition in PETSc generates much more ghost nodes/cells than the > partition in Gmsh, even though both use metis as partitioner. Attached > please find the prism mesh in both vtk and exo format, the test code > modified based on ex1f90 example. Similar problem are observed for larger > dataset with more layers. > > > > I will figure this out by next week. > > > > I have run your mesh and do not get those weird partitions. I am running > in master. What are you using? Also, here is an easy way > > to do this using a PETSc test: > > > > cd $PETSC_DIR > > make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" > EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view > hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 > > ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 > > > > and then load mesh.xmf into Paraview. Here is what I see (attached). Is it > possible for you to try the master branch? > > > > Hi Matt, > > > > Thanks for your quick response. If I use your script, the partition looks > good, as shown in the attached figure. I am working on PETSc 3.13.0 release > version on Mac OS. > > > > Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? > > > > > > It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c > > > > I looked at your code and cannot see any difference. Also, no changes are > in master that are not in 3.13. This is very strange. > > I guess we will have to go one step at a time between the example and your > code. > > > > I will add mesh output to the ex1f90 example and check if the cell/vertex > rank is exactly the same. I wrote the mesh output myself based on the > partition but there should be no problem in that part. The number of ghost > nodes and cells is pretty easy to check. 
Not sure if there is any > difference between the C code and Fortran code that causes the problem. > Anyway, I will keep you updated. > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > For example, in Gmsh, I get partition results using two processors and > four processors as shown below, which are pretty reasonable. > > > > > > However, in PETSc, the partition looks a bit weird. Looks like it takes > layer partition first and then inside layer. If the number of nodes per > layer is very large, this kind of partitioning results into much more ghost > nodes/cells. > > > > Anybody know how to improve the partitioning in PETSc? I have tried > parmetis and chaco. There is no big difference between them. > > > > > > > > Thanks, > > > > Danyang > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457830 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542680 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 349311 bytes Desc: not available URL: From danyang.su at gmail.com Wed Apr 8 15:26:01 2020 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 08 Apr 2020 13:26:01 -0700 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> Message-ID: <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> From: Matthew Knepley Date: Wednesday, April 8, 2020 at 12:50 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: Hi Matt, Here is something pretty interesting. I modified ex1.c file with output of number of nodes and cells (as shown below) . And I also changed the stencil size to 1. /* get coordinates and section */ ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); num_nodes = iend-istart; num_cells = istart; /* Output rank and processor information */ printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, num_nodes, num_cells); If I compile the code using ?make ex1? 
and then run the test using ?mpiexec -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the modified ex1f90 code I sent. ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 2, num_nodes 699, num_cess 824 rank 0: of nprcs: 2, num_nodes 699, num_cess 824 Ah, I was not looking closely. You are asking for a cell overlap of 1 in the partition. That is why these numbers sum to more than the total in the mesh. Do you want a cell overlap of 1? Yes, I need cell overlap of 1 in some circumstance. The mesh has two layers of cells with 412 ?cells per layer and three layers of nodes with 233 nodes per layer. ?The number of cells looks good to me. I am confused why the same code generates pretty different partition. ?If I set the stencil to 0, I get following results. The first method looks good and the second one is not a good choice, with much more number of ghost nodes. ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 #????????????? > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 #????????????? > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 ?? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 0: of nprcs: 2, num_nodes 466, num_cess 412 rank 1: of nprcs: 2, num_nodes 466, num_cess 412 Thanks, Danyang Thanks, Matt ? tests mpiexec -n 4 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 4, num_nodes 432, num_cess 486 rank 0: of nprcs: 4, num_nodes 405, num_cess 448 rank 2: of nprcs: 4, num_nodes 411, num_cess 464 rank 3: of nprcs: 4, num_nodes 420, num_cess 466 However, if I compile and run the code using the script you shared, I get reasonable results. ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 Is there some difference in compiling or runtime options that cause the difference? Would you please check if you can reproduce the same problem using the modified ex1.c? Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 9:37 AM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem From: Matthew Knepley Date: Wednesday, April 8, 2020 at 9:20 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 6:45 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. 
Similar problem are observed for larger dataset with more layers. I will figure this out by next week. I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Hi Matt, Thanks for your quick response. If I use your script, the partition looks good, as shown in the attached figure. I am working on PETSc 3.13.0 release version on Mac OS. Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c I looked at your code and cannot see any difference. Also, no changes are in master that are not in 3.13. This is very strange. I guess we will have to go one step at a time between the example and your code. I will add mesh output to the ex1f90 example and check if the cell/vertex rank is exactly the same. I wrote the mesh output myself based on the partition but there should be no problem in that part. The number of ghost nodes and cells is pretty easy to check. Not sure if there is any difference between the C code and Fortran code that causes the problem. Anyway, I will keep you updated. Thanks, Matt Thanks, Matt Thanks, Matt For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 457831 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 542681 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From knepley at gmail.com  Wed Apr  8 15:31:58 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 8 Apr 2020 16:31:58 -0400
Subject: [petsc-users] DMPlex partition problem
In-Reply-To: <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com>
References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com>
 <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com>
 <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com>
Message-ID: 

On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote:

> Yes, I need a cell overlap of 1 in some circumstances. The mesh has two
> layers of cells with 412 cells per layer and three layers of nodes with
> 233 nodes per layer. The number of cells looks good to me. I am confused
> why the same code generates a pretty different partition.

I think this might just be a confusion over interpretation. Here is how partitioning works:

1) We partition the mesh cells using ParMetis, Chaco, etc.

2) We move those cells (and closures) to the correct processes

3) If you ask for overlap, we mark a layer of adjacent cells on remote processes and move them to each process

The original partitions are the same. Then we add extra cells, and their closures, to each partition. This is what you are asking for. You would get the same answer with Gmsh if it gave you an overlap region.
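To make the three steps above concrete, here is a minimal sketch (not code from the thread) of distributing a plex read from the Exodus file with a cell overlap of 1. The hard-coded file name, the interpolate flag, and the use of DMViewFromOptions are illustrative assumptions.

#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM             dm, dmDist = NULL;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* Read the serial mesh; PETSC_TRUE asks for an interpolated plex */
  ierr = DMPlexCreateFromFile(PETSC_COMM_WORLD, "basin2layer.exo", PETSC_TRUE, &dm);CHKERRQ(ierr);
  /* Steps 1) and 2): partition the cells and move them with their closures.
     The second argument is the requested cell overlap: 0 means no ghost cells,
     1 copies one layer of adjacent cells onto neighboring ranks (step 3). */
  ierr = DMPlexDistribute(dm, 1, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) { /* NULL when running on a single process */
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    dm   = dmDist;
  }
  ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Switching the overlap argument between 0 and 1 should reproduce the jump in per-rank counts seen in the numbers above, since the overlap cells and their closures are counted on every rank that holds a copy.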
Thanks,

    Matt

From danyang.su at gmail.com  Wed Apr  8 15:50:18 2020
From: danyang.su at gmail.com (Danyang Su)
Date: Wed, 08 Apr 2020 13:50:18 -0700
Subject: [petsc-users] DMPlex partition problem
In-Reply-To: 
References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com>
 <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com>
 <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com>
Message-ID: 

Hi Matt,

Here is what I get using ex1c with stencil 0. There is no change in the source code. I just compile and run the code in different ways.
By using "make -f ./gmakefile test ...", it works as expected. However, by using "make ex1" and then running the code with "mpiexec -n ...", the partition does not look good. My code has the same problem as this one if I use a prism mesh.

I just wonder what makes this difference, even without overlap.

Thanks,

Danyang

From danyang.su at gmail.com  Wed Apr  8 16:12:18 2020
From: danyang.su at gmail.com (Danyang Su)
Date: Wed, 08 Apr 2020 14:12:18 -0700
Subject: [petsc-users] DMPlex partition problem
In-Reply-To: 
References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com>
 <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com>
 <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com>
Message-ID: 

Hi Matt,

Attached is another prism mesh using 8 processors. The partition of the lower mesh does not look good.

Thanks,

Danyang

From: Danyang Su
Date: Wednesday, April 8, 2020 at 1:50 PM
To: Matthew Knepley
Cc: PETSc
Subject: Re: [petsc-users] DMPlex partition problem

Hi Matt,

Here is what I get using ex1c with stencil 0. There is no change in the source code. I just compile and run the code in different ways. By using "make -f ./gmakefile test ...", it works as expected. However, by using "make ex1" and then running the code with "mpiexec -n ...", the partition does not look good. My code has the same problem as this one if I use a prism mesh. I just wonder what makes this difference, even without overlap.
Thanks,

Danyang

From danyang.su at gmail.com  Wed Apr  8 16:52:07 2020
From: danyang.su at gmail.com (Danyang Su)
Date: Wed, 08 Apr 2020 14:52:07 -0700
Subject: [petsc-users] DMPlex partition problem
In-Reply-To: 
References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com>
 <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com>
 <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com>
Message-ID: <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com>

Hi Matt,

I am one step closer now. When I run the ex1 code with "-interpolate", the partition is good; without it, it is weird.

Thanks,

Danyang

From: Danyang Su
Date: Wednesday, April 8, 2020 at 2:12 PM
To: Matthew Knepley
Cc: PETSc
Subject: Re: [petsc-users] DMPlex partition problem

Hi Matt,

Attached is another prism mesh using 8 processors. The partition of the lower mesh does not look good.

Thanks,

Danyang

From knepley at gmail.com  Wed Apr  8 18:41:44 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 8 Apr 2020 19:41:44 -0400
Subject: [petsc-users] DMPlex partition problem
In-Reply-To: <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com>
References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com>
 <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com>
 <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com>
 <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com>
Message-ID: 

On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote:

> Hi Matt,
>
> I am one step closer now. When I run the ex1 code with "-interpolate", the
> partition is good; without it, it is weird.

Crap! That did not even occur to me. Yes, the dual graph construction will not work for uninterpolated wedges.

So, do you really need an uninterpolated mesh? If so, I can put it on the buglist.
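Since the dual-graph issue is tied to uninterpolated wedge (prism) meshes, here is a hedged sketch of the two usual ways to end up with an interpolated plex; it mirrors what the -interpolate option toggles in the test example, but the helper names are made up for illustration and this is not a change to PETSc itself.

#include <petscdmplex.h>

/* Option 1: ask for an interpolated mesh when reading the file. */
static PetscErrorCode LoadInterpolated(MPI_Comm comm, const char *filename, DM *dm)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexCreateFromFile(comm, filename, PETSC_TRUE, dm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Option 2: interpolate an existing cell-vertex mesh before distributing it,
   so faces and edges exist when the dual graph is built. */
static PetscErrorCode InterpolateExisting(DM *dm)
{
  DM             idm;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexInterpolate(*dm, &idm);CHKERRQ(ierr);
  ierr = DMDestroy(dm);CHKERRQ(ierr);
  *dm  = idm;
  PetscFunctionReturn(0);
}

Either route should give ParMetis or Chaco a proper dual graph for the prism cells, which is why the partitions look reasonable once -interpolate is used.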
Thanks,

    Matt

From danyang.su at gmail.com  Wed Apr  8 18:47:38 2020
From: danyang.su at gmail.com (Danyang Su)
Date: Wed, 08 Apr 2020 16:47:38 -0700
Subject: [petsc-users] DMPlex partition problem
In-Reply-To: 
References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com>
 <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com>
 <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com>
 <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com>
Message-ID: 

From: Matthew Knepley
Date: Wednesday, April 8, 2020 at 4:41 PM
To: Danyang Su
Cc: PETSc
Subject: Re: [petsc-users] DMPlex partition problem

On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote:

Hi Matt,

I am one step closer now. When I run the ex1 code with "-interpolate", the partition is good; without it, it is weird.

Crap! That did not even occur to me. Yes, the dual graph construction will not work for uninterpolated wedges.

So, do you really need an uninterpolated mesh? If so, I can put it on the buglist.

For prism meshes, I am afraid so. For 2D triangle meshes and 3D tetrahedral meshes, the partition is pretty good without interpolate. That is why I did not have this problem in any of my previous simulations using the other cell types.

Thanks,

Danyang

Thanks,

    Matt

Thanks,

Danyang

From: Danyang Su
Date: Wednesday, April 8, 2020 at 2:12 PM
To: Matthew Knepley
Cc: PETSc
Subject: Re: [petsc-users] DMPlex partition problem

Hi Matt,

Attached is another prism mesh using 8 processors.
/* get coordinates and section */
ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr);
ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr);
ierr = DMGetSection(cda,&cs);CHKERRQ(ierr);
ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr);

num_nodes = iend-istart;
num_cells = istart;

/* Output rank and processor information */
printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, num_nodes, num_cells);

If I compile the code using "make ex1" and then run the test using "mpiexec -n 2 ./ex1 -filename basin2layer.exo", I get the same problem as the modified ex1f90 code I sent. However, if I compile and run the code using the script you shared, I get reasonable results. Is there some difference in compiling or runtime options that causes the difference? Would you please check if you can reproduce the same problem using the modified ex1.c?

For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. It looks like it partitions by layer first and then within each layer. If the number of nodes per layer is very large, this kind of partitioning results in many more ghost nodes/cells. Does anybody know how to improve the partitioning in PETSc? I have tried ParMETIS and Chaco. There is no big difference between them.

Thanks,

Danyang

--

What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 404237 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 457835 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542685 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 349316 bytes Desc: not available URL: From knepley at gmail.com Wed Apr 8 18:50:32 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2020 19:50:32 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com> Message-ID: On Wed, Apr 8, 2020 at 7:47 PM Danyang Su wrote: > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 4:41 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote: > > Hi Matt, > > > > I am one step closer now. When run the ex1 code with ?-interpolate?, the > partition is good, without it, it?s weird. > > Crap! That did not even occur to me. Yes, the dual graph construction > will not work for uninterpolated wedges. > > So, do you really need an uninterpolated mesh? If so, I can put it on the > buglist. > > > > For Prism mesh, I am afraid so. For 2D triangle mesh and 3D tetra mesh, > the partition is pretty good without interpolate. That?s why I didn?t have > problem for all my previous simulations using the other cell types. > What I mean is, are you avoiding interpolating the mesh for memory? The amount of memory is usually small compared to fields on the mesh. Thanks, Matt > Thanks, > > > > Danyang > > > > Thanks, > > > > Matt > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 2:12 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Attached is another prism mesh using 8 processors. 
The partition of the > lower mesh does not looks good. > > > > > > Thanks, > > > > Danyang > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 1:50 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Here is what I get using ex1c with stencil 0. There is no change in the > source code. I just compile and run the code in different ways. By using > ?make -f ./gmakefile ?.?, it works as expected. However, by using ?make > ex1? and then run the code using ?mpiexec -n ??, the partition does not > looks good. My code has the same problem as this one if I use prism mesh. > > > > I just wonder what makes this difference, even without overlap. > > > > > > Thanks, > > > > Danyang > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 1:32 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 12:50 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: > > Hi Matt, > > > > Here is something pretty interesting. I modified ex1.c file with output of > number of nodes and cells (as shown below) . And I also changed the > stencil size to 1. > > > > /* get coordinates and section */ > > ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); > > ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); > > ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); > > ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); > > > > num_nodes = iend-istart; > > num_cells = istart; > > > > /* Output rank and processor information */ > > printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, > num_nodes, num_cells); > > > > > > If I compile the code using ?make ex1? and then run the test using ?mpiexec > -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the > modified ex1f90 code I sent. > > *?* *tests* mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 2, num_nodes 699, num_cess 824 > > rank 0: of nprcs: 2, num_nodes 699, num_cess 824 > > Ah, I was not looking closely. You are asking for a cell overlap of 1 in > the partition. That is why these numbers sum to more than > > the total in the mesh. Do you want a cell overlap of 1? > > > > Yes, I need cell overlap of 1 in some circumstance. The mesh has two > layers of cells with 412 cells per layer and three layers of nodes with > 233 nodes per layer. The number of cells looks good to me. I am confused > why the same code generates pretty different partition. If I set the > stencil to 0, I get following results. The first method looks good and the > second one is not a good choice, with much more number of ghost nodes. > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 > > # > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 > > > > ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 0: of nprcs: 2, num_nodes 466, num_cess 412 > > rank 1: of nprcs: 2, num_nodes 466, num_cess 412 > > > > I think this might just be a confusion over interpretation. Here is how > partitioning works: > > > > 1) We partition the mesh cells using ParMetis, Chaco, etc. 
> > > > 2) We move those cells (and closures) to the correct processes > > > > 3) If you ask for overlap, we mark a layer of adjacent cells on remote > processes and move them to each process > > > > The original partitions are the same, Then we add extra cells, and their > closures, to each partition. This is what you are asking for. > > You would get the same answer with GMsh if it gave you an overlap region. > > > > Thanks, > > > > Matt > > > > Thanks, > > Danyang > > > > > > Thanks, > > > > Matt > > *?* *tests* mpiexec -n 4 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 4, num_nodes 432, num_cess 486 > > rank 0: of nprcs: 4, num_nodes 405, num_cess 448 > > rank 2: of nprcs: 4, num_nodes 411, num_cess 464 > > rank 3: of nprcs: 4, num_nodes 420, num_cess 466 > > > > However, if I compile and run the code using the script you shared, I get > reasonable results. > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 > > # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 > > > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 > > # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 > > # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 > > # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 > > # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 > > > > Is there some difference in compiling or runtime options that cause the > difference? Would you please check if you can reproduce the same problem > using the modified ex1.c? > > > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 9:37 AM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 9:20 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 6:45 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: > > On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > > Dear All, > > > > Hope you are safe and healthy. > > > > I have a question regarding pretty different partition results of prism > mesh. The partition in PETSc generates much more ghost nodes/cells than the > partition in Gmsh, even though both use metis as partitioner. Attached > please find the prism mesh in both vtk and exo format, the test code > modified based on ex1f90 example. Similar problem are observed for larger > dataset with more layers. > > > > I will figure this out by next week. > > > > I have run your mesh and do not get those weird partitions. I am running > in master. What are you using? Also, here is an easy way > > to do this using a PETSc test: > > > > cd $PETSC_DIR > > make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" > EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view > hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 > > ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 > > > > and then load mesh.xmf into Paraview. Here is what I see (attached). 
Is it > possible for you to try the master branch? > > > > Hi Matt, > > > > Thanks for your quick response. If I use your script, the partition looks > good, as shown in the attached figure. I am working on PETSc 3.13.0 release > version on Mac OS. > > > > Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? > > > > > > It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c > > > > I looked at your code and cannot see any difference. Also, no changes are > in master that are not in 3.13. This is very strange. > > I guess we will have to go one step at a time between the example and your > code. > > > > I will add mesh output to the ex1f90 example and check if the cell/vertex > rank is exactly the same. I wrote the mesh output myself based on the > partition but there should be no problem in that part. The number of ghost > nodes and cells is pretty easy to check. Not sure if there is any > difference between the C code and Fortran code that causes the problem. > Anyway, I will keep you updated. > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > For example, in Gmsh, I get partition results using two processors and > four processors as shown below, which are pretty reasonable. > > > > > > However, in PETSc, the partition looks a bit weird. Looks like it takes > layer partition first and then inside layer. If the number of nodes per > layer is very large, this kind of partitioning results into much more ghost > nodes/cells. > > > > Anybody know how to improve the partitioning in PETSc? I have tried > parmetis and chaco. There is no big difference between them. > > > > > > > > Thanks, > > > > Danyang > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 404237 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 457835 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542685 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 349316 bytes Desc: not available URL: From danyang.su at gmail.com Wed Apr 8 19:18:18 2020 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 08 Apr 2020 17:18:18 -0700 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com> Message-ID: <34F1B090-F3B7-4721-BFF2-7F8EE02B6CD6@gmail.com> From: Matthew Knepley Date: Wednesday, April 8, 2020 at 4:50 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:47 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 4:41 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote: Hi Matt, I am one step closer now. When run the ex1 code with ?-interpolate?, the partition is good, without it, it?s weird. Crap! That did not even occur to me. Yes, the dual graph construction will not work for uninterpolated wedges. So, do you really need an uninterpolated mesh? If so, I can put it on the buglist. For Prism mesh, I am afraid so. For 2D triangle mesh and 3D tetra mesh, the partition is pretty good without interpolate. That?s why I didn?t have problem for all my previous simulations using the other cell types. What I mean is, are you avoiding interpolating the mesh for memory? The amount of memory is usually small compared to fields on the mesh. No, not because of memory consumption problem. When the code was first written several years ago, I just put interpolate = false there. Now after setting interpolate = true, I need to update the code in setting cell-node index (array cell). The following code does not work anymore when interpolate = true. There is some code that is not well written and it needs to be improved. ??????!c add local to global cell id mapping ????? do ipoint = 0, istart-1 ??????? icell = ipoint + 1 ??????? call DMPlexGetCone(dmda_flow%da,ipoint,cone,ierr) ??????? CHKERRQ(ierr) ??????? do ivtex = 1, num_nodes_per_cell ????????? cell_node_idx(:,ipoint+1) = cone - istart + 1 ??????? end do ??????? call DMPlexRestoreCone(dmda_flow%da,ipoint,cone,ierr) ??????? CHKERRQ(ierr) ????? end do Thanks, Danyang Thanks, Matt Thanks, Danyang Thanks, Matt Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 2:12 PM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem Hi Matt, Attached is another prism mesh using 8 processors. The partition of the lower mesh does not looks good. Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 1:50 PM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem Hi Matt, Here is what I get using ex1c with stencil 0. 
There is no change in the source code. I just compile and run the code in different ways. By using ?make -f ./gmakefile ?.?, it works as expected. However, by using ?make ex1? and then run the code using ?mpiexec -n ??, the partition does not looks good. My code has the same problem as this one if I use prism mesh. I just wonder what makes this difference, even without overlap. Thanks, Danyang From: Matthew Knepley Date: Wednesday, April 8, 2020 at 1:32 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 12:50 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: Hi Matt, Here is something pretty interesting. I modified ex1.c file with output of number of nodes and cells (as shown below) . And I also changed the stencil size to 1. /* get coordinates and section */ ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); num_nodes = iend-istart; num_cells = istart; /* Output rank and processor information */ printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, num_nodes, num_cells); If I compile the code using ?make ex1? and then run the test using ?mpiexec -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the modified ex1f90 code I sent. ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 2, num_nodes 699, num_cess 824 rank 0: of nprcs: 2, num_nodes 699, num_cess 824 Ah, I was not looking closely. You are asking for a cell overlap of 1 in the partition. That is why these numbers sum to more than the total in the mesh. Do you want a cell overlap of 1? Yes, I need cell overlap of 1 in some circumstance. The mesh has two layers of cells with 412 cells per layer and three layers of nodes with 233 nodes per layer. The number of cells looks good to me. I am confused why the same code generates pretty different partition. If I set the stencil to 0, I get following results. The first method looks good and the second one is not a good choice, with much more number of ghost nodes. ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 # > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 # > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 0: of nprcs: 2, num_nodes 466, num_cess 412 rank 1: of nprcs: 2, num_nodes 466, num_cess 412 I think this might just be a confusion over interpretation. Here is how partitioning works: 1) We partition the mesh cells using ParMetis, Chaco, etc. 2) We move those cells (and closures) to the correct processes 3) If you ask for overlap, we mark a layer of adjacent cells on remote processes and move them to each process The original partitions are the same, Then we add extra cells, and their closures, to each partition. This is what you are asking for. You would get the same answer with GMsh if it gave you an overlap region. Thanks, Matt Thanks, Danyang Thanks, Matt ? 
tests mpiexec -n 4 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 4, num_nodes 432, num_cess 486 rank 0: of nprcs: 4, num_nodes 405, num_cess 448 rank 2: of nprcs: 4, num_nodes 411, num_cess 464 rank 3: of nprcs: 4, num_nodes 420, num_cess 466 However, if I compile and run the code using the script you shared, I get reasonable results. ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 Is there some difference in compiling or runtime options that cause the difference? Would you please check if you can reproduce the same problem using the modified ex1.c? Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 9:37 AM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem From: Matthew Knepley Date: Wednesday, April 8, 2020 at 9:20 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 6:45 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. Similar problem are observed for larger dataset with more layers. I will figure this out by next week. I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Hi Matt, Thanks for your quick response. If I use your script, the partition looks good, as shown in the attached figure. I am working on PETSc 3.13.0 release version on Mac OS. Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c I looked at your code and cannot see any difference. Also, no changes are in master that are not in 3.13. This is very strange. I guess we will have to go one step at a time between the example and your code. I will add mesh output to the ex1f90 example and check if the cell/vertex rank is exactly the same. I wrote the mesh output myself based on the partition but there should be no problem in that part. The number of ghost nodes and cells is pretty easy to check. 
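One way to check them is to count, on each rank, the leaves of the point SF that fall in the cell and vertex strata; a small C sketch of that check (dm here is assumed to be the already distributed DMPlex) is:

#include <petscdmplex.h>

/* Count the ghost (leaf) cells and vertices on this rank from the point SF. */
static PetscErrorCode CountGhosts(DM dm, PetscInt *nGhostCells, PetscInt *nGhostNodes)
{
  PetscSF            sf;
  const PetscInt    *leaves;
  const PetscSFNode *remotes;
  PetscInt           nroots, nleaves, l, cStart, cEnd, vStart, vEnd;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  *nGhostCells = 0; *nGhostNodes = 0;
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* cells    */
  ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);  /* vertices */
  ierr = DMGetPointSF(dm, &sf);CHKERRQ(ierr);
  ierr = PetscSFGetGraph(sf, &nroots, &nleaves, &leaves, &remotes);CHKERRQ(ierr);
  /* Leaf points are owned by another rank; leaves may be NULL, meaning they are 0..nleaves-1 */
  for (l = 0; l < nleaves; ++l) {
    const PetscInt point = leaves ? leaves[l] : l;

    if      (point >= cStart && point < cEnd) (*nGhostCells)++;
    else if (point >= vStart && point < vEnd) (*nGhostNodes)++;
  }
  PetscFunctionReturn(0);
}

On an interpolated mesh the point SF also contains shared faces and edges, so filtering by stratum, rather than just taking nleaves, is what keeps the counts meaningful.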
Not sure if there is any difference between the C code and Fortran code that causes the problem. Anyway, I will keep you updated. Thanks, Matt Thanks, Matt Thanks, Matt For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 404238 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281269 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 457836 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542686 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.png Type: image/png Size: 349317 bytes Desc: not available URL: From knepley at gmail.com Wed Apr 8 19:32:18 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2020 20:32:18 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: <34F1B090-F3B7-4721-BFF2-7F8EE02B6CD6@gmail.com> References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com> <34F1B090-F3B7-4721-BFF2-7F8EE02B6CD6@gmail.com> Message-ID: On Wed, Apr 8, 2020 at 8:18 PM Danyang Su wrote: > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 4:50 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:47 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 4:41 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote: > > Hi Matt, > > > > I am one step closer now. When run the ex1 code with ?-interpolate?, the > partition is good, without it, it?s weird. > > Crap! That did not even occur to me. Yes, the dual graph construction > will not work for uninterpolated wedges. > > So, do you really need an uninterpolated mesh? If so, I can put it on the > buglist. > > > > For Prism mesh, I am afraid so. For 2D triangle mesh and 3D tetra mesh, > the partition is pretty good without interpolate. That?s why I didn?t have > problem for all my previous simulations using the other cell types. > > > > What I mean is, are you avoiding interpolating the mesh for memory? The > amount of memory is usually small compared to > > fields on the mesh. > > > > No, not because of memory consumption problem. When the code was first > written several years ago, I just put interpolate = false there. Now after > setting interpolate = true, I need to update the code in setting cell-node > index (array cell). The following code does not work anymore when > interpolate = true. There is some code that is not well written and it > needs to be improved. > > *!c add local to global cell id mapping* > > do ipoint = 0, istart-1 > > icell = ipoint + 1 > > call DMPlexGetCone(dmda_flow%da,ipoint,cone,ierr) > > CHKERRQ(ierr) > > do ivtex = 1, num_nodes_per_cell > > cell_node_idx(:,ipoint+1) = cone - istart + 1 > > end do > > call DMPlexRestoreCone(dmda_flow%da,ipoint,cone,ierr) > > CHKERRQ(ierr) > > end do > > My F90 is a bit shaky, but I think you want PetscInt, pointer :: nClosure(:) do ipoint = 0, istart-1 icell = ipoint + 1 call DMPlexGetClosure(dmda_flow%da,ipoint,cone,ierr);CHKERRQ(ierr) call DMPlexGetTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) ivtex = 0 do icl = 0,len(nClosure)-1,2 if ((nClosure(icl) >= vStart) .and. (nClosure(icl) < vEnd)) then cell_node_idx(ivtex,ipoint+1) = nClosure(icl) - istart + 1 ivtex = ivtex + 1 end if end do call DMPlexRestoreTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) end do Basically, you use the closure, and filter out everything that is not a vertex. 
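Spelled out in C (which the F90 above mirrors), a minimal sketch of that loop could be the following; here dm is the DMPlex, vStart/vEnd and cStart/cEnd come from the depth and height strata, and cellVerts is just an illustrative place to put the result:

#include <petscdmplex.h>

/* Collect, for every cell, the numbers of its vertices (0-based, relative to vStart).
   Assumes cellVerts has room for the vertices of each cell. */
static PetscErrorCode GetCellVertices(DM dm, PetscInt **cellVerts)
{
  PetscInt       cStart, cEnd, vStart, vEnd, c;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* cells    */
  ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);  /* vertices */
  for (c = cStart; c < cEnd; ++c) {
    PetscInt *closure = NULL;
    PetscInt  clSize, cl, nv = 0;

    ierr = DMPlexGetTransitiveClosure(dm, c, PETSC_TRUE, &clSize, &closure);CHKERRQ(ierr);
    /* closure holds (point, orientation) pairs, hence the stride of 2 */
    for (cl = 0; cl < 2*clSize; cl += 2) {
      const PetscInt point = closure[cl];

      if (point >= vStart && point < vEnd) cellVerts[c - cStart][nv++] = point - vStart;
    }
    ierr = DMPlexRestoreTransitiveClosure(dm, c, PETSC_TRUE, &clSize, &closure);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

The only change from the cone version is that the closure is filtered by the vertex stratum, so the same loop works whether or not the mesh is interpolated.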
Thanks, Matt > Thanks, > > > > Danyang > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Danyang > > > > Thanks, > > > > Matt > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 2:12 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Attached is another prism mesh using 8 processors. The partition of the > lower mesh does not looks good. > > > > > > Thanks, > > > > Danyang > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 1:50 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Here is what I get using ex1c with stencil 0. There is no change in the > source code. I just compile and run the code in different ways. By using > ?make -f ./gmakefile ?.?, it works as expected. However, by using ?make > ex1? and then run the code using ?mpiexec -n ??, the partition does not > looks good. My code has the same problem as this one if I use prism mesh. > > > > I just wonder what makes this difference, even without overlap. > > > > > > Thanks, > > > > Danyang > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 1:32 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 12:50 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: > > Hi Matt, > > > > Here is something pretty interesting. I modified ex1.c file with output of > number of nodes and cells (as shown below) . And I also changed the > stencil size to 1. > > > > /* get coordinates and section */ > > ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); > > ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); > > ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); > > ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); > > > > num_nodes = iend-istart; > > num_cells = istart; > > > > /* Output rank and processor information */ > > printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, > num_nodes, num_cells); > > > > > > If I compile the code using ?make ex1? and then run the test using ?mpiexec > -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the > modified ex1f90 code I sent. > > *?* *tests* mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 2, num_nodes 699, num_cess 824 > > rank 0: of nprcs: 2, num_nodes 699, num_cess 824 > > Ah, I was not looking closely. You are asking for a cell overlap of 1 in > the partition. That is why these numbers sum to more than > > the total in the mesh. Do you want a cell overlap of 1? > > > > Yes, I need cell overlap of 1 in some circumstance. The mesh has two > layers of cells with 412 cells per layer and three layers of nodes with > 233 nodes per layer. The number of cells looks good to me. I am confused > why the same code generates pretty different partition. If I set the > stencil to 0, I get following results. The first method looks good and the > second one is not a good choice, with much more number of ghost nodes. 
> > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 > > # > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 > > > > ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 0: of nprcs: 2, num_nodes 466, num_cess 412 > > rank 1: of nprcs: 2, num_nodes 466, num_cess 412 > > > > I think this might just be a confusion over interpretation. Here is how > partitioning works: > > > > 1) We partition the mesh cells using ParMetis, Chaco, etc. > > > > 2) We move those cells (and closures) to the correct processes > > > > 3) If you ask for overlap, we mark a layer of adjacent cells on remote > processes and move them to each process > > > > The original partitions are the same, Then we add extra cells, and their > closures, to each partition. This is what you are asking for. > > You would get the same answer with GMsh if it gave you an overlap region. > > > > Thanks, > > > > Matt > > > > Thanks, > > Danyang > > > > > > Thanks, > > > > Matt > > *?* *tests* mpiexec -n 4 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 4, num_nodes 432, num_cess 486 > > rank 0: of nprcs: 4, num_nodes 405, num_cess 448 > > rank 2: of nprcs: 4, num_nodes 411, num_cess 464 > > rank 3: of nprcs: 4, num_nodes 420, num_cess 466 > > > > However, if I compile and run the code using the script you shared, I get > reasonable results. > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 > > # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 > > > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 > > # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 > > # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 > > # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 > > # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 > > > > Is there some difference in compiling or runtime options that cause the > difference? Would you please check if you can reproduce the same problem > using the modified ex1.c? > > > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 9:37 AM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 9:20 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 6:45 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: > > On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > > Dear All, > > > > Hope you are safe and healthy. > > > > I have a question regarding pretty different partition results of prism > mesh. The partition in PETSc generates much more ghost nodes/cells than the > partition in Gmsh, even though both use metis as partitioner. Attached > please find the prism mesh in both vtk and exo format, the test code > modified based on ex1f90 example. 
Similar problem are observed for larger > dataset with more layers. > > > > I will figure this out by next week. > > > > I have run your mesh and do not get those weird partitions. I am running > in master. What are you using? Also, here is an easy way > > to do this using a PETSc test: > > > > cd $PETSC_DIR > > make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" > EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view > hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 > > ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 > > > > and then load mesh.xmf into Paraview. Here is what I see (attached). Is it > possible for you to try the master branch? > > > > Hi Matt, > > > > Thanks for your quick response. If I use your script, the partition looks > good, as shown in the attached figure. I am working on PETSc 3.13.0 release > version on Mac OS. > > > > Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? > > > > > > It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c > > > > I looked at your code and cannot see any difference. Also, no changes are > in master that are not in 3.13. This is very strange. > > I guess we will have to go one step at a time between the example and your > code. > > > > I will add mesh output to the ex1f90 example and check if the cell/vertex > rank is exactly the same. I wrote the mesh output myself based on the > partition but there should be no problem in that part. The number of ghost > nodes and cells is pretty easy to check. Not sure if there is any > difference between the C code and Fortran code that causes the problem. > Anyway, I will keep you updated. > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > For example, in Gmsh, I get partition results using two processors and > four processors as shown below, which are pretty reasonable. > > > > > > However, in PETSc, the partition looks a bit weird. Looks like it takes > layer partition first and then inside layer. If the number of nodes per > layer is very large, this kind of partitioning results into much more ghost > nodes/cells. > > > > Anybody know how to improve the partitioning in PETSc? I have tried > parmetis and chaco. There is no big difference between them. > > > > > > > > Thanks, > > > > Danyang > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 404238 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281269 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 457836 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542686 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 349317 bytes Desc: not available URL: From danyang.su at gmail.com Wed Apr 8 19:46:14 2020 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 08 Apr 2020 17:46:14 -0700 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com> <34F1B090-F3B7-4721-BFF2-7F8EE02B6CD6@gmail.com> Message-ID: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 5:32 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 8:18 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 4:50 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:47 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 4:41 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote: Hi Matt, I am one step closer now. When run the ex1 code with ?-interpolate?, the partition is good, without it, it?s weird. Crap! That did not even occur to me. Yes, the dual graph construction will not work for uninterpolated wedges. So, do you really need an uninterpolated mesh? If so, I can put it on the buglist. For Prism mesh, I am afraid so. For 2D triangle mesh and 3D tetra mesh, the partition is pretty good without interpolate. That?s why I didn?t have problem for all my previous simulations using the other cell types. What I mean is, are you avoiding interpolating the mesh for memory? The amount of memory is usually small compared to fields on the mesh. No, not because of memory consumption problem. When the code was first written several years ago, I just put interpolate = false there. 
Now after setting interpolate = true, I need to update the code in setting the cell-node index (array cell). The following code does not work anymore when interpolate = true. There is some code that is not well written and it needs to be improved.

!c add local to global cell id mapping
do ipoint = 0, istart-1
  icell = ipoint + 1
  call DMPlexGetCone(dmda_flow%da,ipoint,cone,ierr)
  CHKERRQ(ierr)
  do ivtex = 1, num_nodes_per_cell
    cell_node_idx(:,ipoint+1) = cone - istart + 1
  end do
  call DMPlexRestoreCone(dmda_flow%da,ipoint,cone,ierr)
  CHKERRQ(ierr)
end do

My F90 is a bit shaky, but I think you want

PetscInt, pointer :: nClosure(:)
do ipoint = 0, istart-1
  icell = ipoint + 1
  call DMPlexGetClosure(dmda_flow%da,ipoint,cone,ierr);CHKERRQ(ierr)
  call DMPlexGetTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr)
  ivtex = 0
  do icl = 0,len(nClosure)-1,2
    if ((nClosure(icl) >= vStart) .and. (nClosure(icl) < vEnd)) then
      cell_node_idx(ivtex,ipoint+1) = nClosure(icl) - istart + 1
      ivtex = ivtex + 1
    end if
  end do
  call DMPlexRestoreTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr)
end do

Basically, you use the closure, and filter out everything that is not a vertex.

Thanks, Matt. Would you mind giving some tips on the following code as well? The last line of the following section (num_cells_loc = num_cells+num_nodes-nleaves-num_nodes_loc) does not work either when interpolate == true.

!c get local mesh DM
call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
CHKERRQ(ierr)
call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
CHKERRQ(ierr)
call DMGetSection(cda,cs,ierr)
CHKERRQ(ierr)
call PetscSectionGetChart(cs,istart,iend,ierr)
CHKERRQ(ierr)

!c Calculate number of nodes/cells with ghost nodes/cells for each processor
num_nodes = iend-istart
num_cells = istart

!c Calculate local number of nodes without ghost nodes
num_nodes_loc = 0
do ipoint = istart, iend-1
  call DMPlexGetPointGlobal(cda,ipoint,pstart,pend,ierr)
  CHKERRQ(ierr)
  if (pend >= 0) then
    num_nodes_loc = num_nodes_loc + 1
  end if
end do

!c Calculate number of cells without ghost cells for each processor
call DMGetPointSF(dmda_flow%da,sf,ierr)
CHKERRQ(ierr)
call PetscSFGetGraph(sf,nroots,nleaves,gmine,gremote,ierr)
CHKERRQ(ierr)

!!!!! This calculation is correct when interpolate == false !!!!!
!!!!! but incorrect when interpolate == true                !!!!!
num_cells_loc = num_cells+num_nodes-nleaves-num_nodes_loc

Thanks, Danyang Thanks, Matt Thanks, Danyang Thanks, Matt Thanks, Danyang Thanks, Matt Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 2:12 PM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem Hi Matt, Attached is another prism mesh using 8 processors. The partition of the lower mesh does not look good. Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 1:50 PM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem Hi Matt, Here is what I get using ex1c with stencil 0. There is no change in the source code. I just compile and run the code in different ways. By using "make -f ./gmakefile ...", it works as expected. However, by using "make ex1" and then running the code using "mpiexec -n ...", the partition does not look good. My code has the same problem as this one if I use prism mesh.
I just wonder what makes this difference, even without overlap. Thanks, Danyang From: Matthew Knepley Date: Wednesday, April 8, 2020 at 1:32 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 12:50 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: Hi Matt, Here is something pretty interesting. I modified ex1.c file with output of number of nodes and cells (as shown below) . And I also changed the stencil size to 1. /* get coordinates and section */ ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); num_nodes = iend-istart; num_cells = istart; /* Output rank and processor information */ printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, num_nodes, num_cells); If I compile the code using ?make ex1? and then run the test using ?mpiexec -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the modified ex1f90 code I sent. ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 2, num_nodes 699, num_cess 824 rank 0: of nprcs: 2, num_nodes 699, num_cess 824 Ah, I was not looking closely. You are asking for a cell overlap of 1 in the partition. That is why these numbers sum to more than the total in the mesh. Do you want a cell overlap of 1? Yes, I need cell overlap of 1 in some circumstance. The mesh has two layers of cells with 412 cells per layer and three layers of nodes with 233 nodes per layer. The number of cells looks good to me. I am confused why the same code generates pretty different partition. If I set the stencil to 0, I get following results. The first method looks good and the second one is not a good choice, with much more number of ghost nodes. ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 # > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 # > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 0: of nprcs: 2, num_nodes 466, num_cess 412 rank 1: of nprcs: 2, num_nodes 466, num_cess 412 I think this might just be a confusion over interpretation. Here is how partitioning works: 1) We partition the mesh cells using ParMetis, Chaco, etc. 2) We move those cells (and closures) to the correct processes 3) If you ask for overlap, we mark a layer of adjacent cells on remote processes and move them to each process The original partitions are the same, Then we add extra cells, and their closures, to each partition. This is what you are asking for. You would get the same answer with GMsh if it gave you an overlap region. Thanks, Matt Thanks, Danyang Thanks, Matt ? tests mpiexec -n 4 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 4, num_nodes 432, num_cess 486 rank 0: of nprcs: 4, num_nodes 405, num_cess 448 rank 2: of nprcs: 4, num_nodes 411, num_cess 464 rank 3: of nprcs: 4, num_nodes 420, num_cess 466 However, if I compile and run the code using the script you shared, I get reasonable results. ? 
petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 Is there some difference in compiling or runtime options that cause the difference? Would you please check if you can reproduce the same problem using the modified ex1.c? Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 9:37 AM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem From: Matthew Knepley Date: Wednesday, April 8, 2020 at 9:20 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 6:45 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. Similar problem are observed for larger dataset with more layers. I will figure this out by next week. I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Hi Matt, Thanks for your quick response. If I use your script, the partition looks good, as shown in the attached figure. I am working on PETSc 3.13.0 release version on Mac OS. Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c I looked at your code and cannot see any difference. Also, no changes are in master that are not in 3.13. This is very strange. I guess we will have to go one step at a time between the example and your code. I will add mesh output to the ex1f90 example and check if the cell/vertex rank is exactly the same. I wrote the mesh output myself based on the partition but there should be no problem in that part. The number of ghost nodes and cells is pretty easy to check. Not sure if there is any difference between the C code and Fortran code that causes the problem. Anyway, I will keep you updated. Thanks, Matt Thanks, Matt Thanks, Matt For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. 
Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 404239 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281270 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 457837 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542687 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.png Type: image/png Size: 349318 bytes Desc: not available URL: From knepley at gmail.com Wed Apr 8 19:54:59 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2020 20:54:59 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com> <34F1B090-F3B7-4721-BFF2-7F8EE02B6CD6@gmail.com> Message-ID: On Wed, Apr 8, 2020 at 8:46 PM Danyang Su wrote: > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 5:32 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 8:18 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 4:50 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:47 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 4:41 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote: > > Hi Matt, > > > > I am one step closer now. When run the ex1 code with ?-interpolate?, the > partition is good, without it, it?s weird. > > Crap! That did not even occur to me. Yes, the dual graph construction > will not work for uninterpolated wedges. > > So, do you really need an uninterpolated mesh? If so, I can put it on the > buglist. > > > > For Prism mesh, I am afraid so. For 2D triangle mesh and 3D tetra mesh, > the partition is pretty good without interpolate. That?s why I didn?t have > problem for all my previous simulations using the other cell types. > > > > What I mean is, are you avoiding interpolating the mesh for memory? The > amount of memory is usually small compared to > > fields on the mesh. > > > > No, not because of memory consumption problem. When the code was first > written several years ago, I just put interpolate = false there. Now after > setting interpolate = true, I need to update the code in setting cell-node > index (array cell). The following code does not work anymore when > interpolate = true. There is some code that is not well written and it > needs to be improved. > > *!c add local to global cell id mapping* > > do ipoint = 0, istart-1 > > icell = ipoint + 1 > > call DMPlexGetCone(dmda_flow%da,ipoint,cone,ierr) > > CHKERRQ(ierr) > > do ivtex = 1, num_nodes_per_cell > > cell_node_idx(:,ipoint+1) = cone - istart + 1 > > end do > > call DMPlexRestoreCone(dmda_flow%da,ipoint,cone,ierr) > > CHKERRQ(ierr) > > end do > > > > > > My F90 is a bit shaky, but I think you want > > > > PetscInt, pointer :: nClosure(:) > > do ipoint = 0, istart-1 > icell = ipoint + 1 > call DMPlexGetClosure(dmda_flow%da,ipoint,cone,ierr);CHKERRQ(ierr) > call > DMPlexGetTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) > ivtex = 0 > do icl = 0,len(nClosure)-1,2 > if ((nClosure(icl) >= vStart) .and. (nClosure(icl) < vEnd)) then > cell_node_idx(ivtex,ipoint+1) = nClosure(icl) - istart + 1 > ivtex = ivtex + 1 > end if > end do > call > DMPlexRestoreTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) > end do > > > > Basically, you use the closure, and filter out everything that is not a > vertex. > > > > Thanks, Matt. Would you mind give some tips on the following code as well. 
> The last line of following section (num_cells_loc = num_cells+num_nodes- > nleaves-num_nodes_loc) does not work either when *interpolate == true.* > Sure. The reason that it does not work is that it assumes the only shared things are vertices and cells. Now we also share edges and faces. I think its easier to just compute it directly: num_cells_loc = num_cells do l = 0,nleaves-1 if ((gmine(l) >= cStart) .and. (gmine(l) < cEnd)) then num_cells_loc = num_cells_loc - 1 end if end do Thanks, Matt > > > *!c get local mesh DM* > > call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr) > > CHKERRQ(ierr) > > > > call DMGetCoordinateDM(dmda_flow%da,cda,ierr) > > CHKERRQ(ierr) > > > > call DMGetSection(cda,cs,ierr) > > CHKERRQ(ierr) > > > > call PetscSectionGetChart(cs,istart,iend,ierr) > > CHKERRQ(ierr) > > > > *!c Calculate number of nodes/cells with ghost nodes/cells for each > processor* > > num_nodes = iend-istart > > num_cells = istart > > > > *!c Calculate local number of nodes without ghost nodes* > > num_nodes_loc = 0 > > do ipoint = istart, iend-1 > > call DMPlexGetPointGlobal(cda,ipoint,pstart,pend,ierr) > > CHKERRQ(ierr) > > if (pend >= 0) then > > num_nodes_loc = num_nodes_loc + 1 > > end if > > end do > > > > *!c Calculate number of cells without ghost cells for each > processor* > > call DMGetPointSF(dmda_flow%da,sf,ierr) > > CHKERRQ(ierr) > > > > call PetscSFGetGraph(sf,nroots,nleaves,gmine,gremote,ierr) > > CHKERRQ(ierr) > > > > *!!!!!This calculation is correct when interpolate == false!!!!!* > > *!!!!! but incorrect when interpolate == true !!!!!* > > num_cells_loc = num_cells+num_nodes-nleaves-num_nodes_loc > > > > Thanks, > > > > Danyang > > > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Danyang > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Danyang > > > > Thanks, > > > > Matt > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 2:12 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Attached is another prism mesh using 8 processors. The partition of the > lower mesh does not looks good. > > > > > > Thanks, > > > > Danyang > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 1:50 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Here is what I get using ex1c with stencil 0. There is no change in the > source code. I just compile and run the code in different ways. By using > ?make -f ./gmakefile ?.?, it works as expected. However, by using ?make > ex1? and then run the code using ?mpiexec -n ??, the partition does not > looks good. My code has the same problem as this one if I use prism mesh. > > > > I just wonder what makes this difference, even without overlap. > > > > > > Thanks, > > > > Danyang > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 1:32 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 12:50 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: > > Hi Matt, > > > > Here is something pretty interesting. I modified ex1.c file with output of > number of nodes and cells (as shown below) . And I also changed the > stencil size to 1. 
> > > > /* get coordinates and section */ > > ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); > > ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); > > ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); > > ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); > > > > num_nodes = iend-istart; > > num_cells = istart; > > > > /* Output rank and processor information */ > > printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, > num_nodes, num_cells); > > > > > > If I compile the code using ?make ex1? and then run the test using ?mpiexec > -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the > modified ex1f90 code I sent. > > *?* *tests* mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 2, num_nodes 699, num_cess 824 > > rank 0: of nprcs: 2, num_nodes 699, num_cess 824 > > Ah, I was not looking closely. You are asking for a cell overlap of 1 in > the partition. That is why these numbers sum to more than > > the total in the mesh. Do you want a cell overlap of 1? > > > > Yes, I need cell overlap of 1 in some circumstance. The mesh has two > layers of cells with 412 cells per layer and three layers of nodes with > 233 nodes per layer. The number of cells looks good to me. I am confused > why the same code generates pretty different partition. If I set the > stencil to 0, I get following results. The first method looks good and the > second one is not a good choice, with much more number of ghost nodes. > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 > > # > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 > > > > ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 0: of nprcs: 2, num_nodes 466, num_cess 412 > > rank 1: of nprcs: 2, num_nodes 466, num_cess 412 > > > > I think this might just be a confusion over interpretation. Here is how > partitioning works: > > > > 1) We partition the mesh cells using ParMetis, Chaco, etc. > > > > 2) We move those cells (and closures) to the correct processes > > > > 3) If you ask for overlap, we mark a layer of adjacent cells on remote > processes and move them to each process > > > > The original partitions are the same, Then we add extra cells, and their > closures, to each partition. This is what you are asking for. > > You would get the same answer with GMsh if it gave you an overlap region. > > > > Thanks, > > > > Matt > > > > Thanks, > > Danyang > > > > > > Thanks, > > > > Matt > > *?* *tests* mpiexec -n 4 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 4, num_nodes 432, num_cess 486 > > rank 0: of nprcs: 4, num_nodes 405, num_cess 448 > > rank 2: of nprcs: 4, num_nodes 411, num_cess 464 > > rank 3: of nprcs: 4, num_nodes 420, num_cess 466 > > > > However, if I compile and run the code using the script you shared, I get > reasonable results. 
> > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 > > # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 > > > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 > > # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 > > # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 > > # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 > > # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 > > > > Is there some difference in compiling or runtime options that cause the > difference? Would you please check if you can reproduce the same problem > using the modified ex1.c? > > > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 9:37 AM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 9:20 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 6:45 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: > > On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > > Dear All, > > > > Hope you are safe and healthy. > > > > I have a question regarding pretty different partition results of prism > mesh. The partition in PETSc generates much more ghost nodes/cells than the > partition in Gmsh, even though both use metis as partitioner. Attached > please find the prism mesh in both vtk and exo format, the test code > modified based on ex1f90 example. Similar problem are observed for larger > dataset with more layers. > > > > I will figure this out by next week. > > > > I have run your mesh and do not get those weird partitions. I am running > in master. What are you using? Also, here is an easy way > > to do this using a PETSc test: > > > > cd $PETSC_DIR > > make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" > EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view > hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 > > ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 > > > > and then load mesh.xmf into Paraview. Here is what I see (attached). Is it > possible for you to try the master branch? > > > > Hi Matt, > > > > Thanks for your quick response. If I use your script, the partition looks > good, as shown in the attached figure. I am working on PETSc 3.13.0 release > version on Mac OS. > > > > Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? > > > > > > It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c > > > > I looked at your code and cannot see any difference. Also, no changes are > in master that are not in 3.13. This is very strange. > > I guess we will have to go one step at a time between the example and your > code. > > > > I will add mesh output to the ex1f90 example and check if the cell/vertex > rank is exactly the same. I wrote the mesh output myself based on the > partition but there should be no problem in that part. The number of ghost > nodes and cells is pretty easy to check. 
Not sure if there is any > difference between the C code and Fortran code that causes the problem. > Anyway, I will keep you updated. > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > For example, in Gmsh, I get partition results using two processors and > four processors as shown below, which are pretty reasonable. > > > > > > However, in PETSc, the partition looks a bit weird. Looks like it takes > layer partition first and then inside layer. If the number of nodes per > layer is very large, this kind of partitioning results into much more ghost > nodes/cells. > > > > Anybody know how to improve the partitioning in PETSc? I have tried > parmetis and chaco. There is no big difference between them. > > > > > > > > Thanks, > > > > Danyang > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 404239 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281270 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png Type: image/png Size: 457837 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542687 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 349318 bytes Desc: not available URL: From danyang.su at gmail.com Thu Apr 9 01:29:44 2020 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 08 Apr 2020 23:29:44 -0700 Subject: [petsc-users] DMPlex partition problem In-Reply-To: References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com> <34F1B090-F3B7-4721-BFF2-7F8EE02B6CD6@gmail.com> Message-ID: <5BB716B9-71F3-4F87-9ED3-FF0BF5562727@gmail.com>

Hi Matt,

After some modifications to your code (changing the index from zero-based to one-based and adjusting the gmine condition), it works now.

To calculate the number of locally owned cells, the following code is used:

          num_cells_loc = num_cells
          do ipoint = 1, nleaves
            if (gmine(ipoint) < istart) then
              num_cells_loc = num_cells_loc - 1
            end if
          end do

To calculate the cell-node index, the following code is used:

      do ipoint = 0, istart-1
        icell = ipoint + 1
        call DMPlexGetTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr)
        CHKERRQ(ierr)

        ivtex = 0
        do icl = 1,size(nClosure),2
          if (nClosure(icl) >= istart .and. nClosure(icl) < iend) then
            ivtex = ivtex + 1
            cell_node_idx(ivtex,ipoint+1) = nClosure(icl) - istart + 1
          end if
        end do
        call DMPlexRestoreTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr)
        CHKERRQ(ierr)
      end do

Thanks for your invaluable help. I really appreciate that.

All the best,

Danyang

From: Matthew Knepley Date: Wednesday, April 8, 2020 at 5:55 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 8:46 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 5:32 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 8:18 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 4:50 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:47 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 4:41 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote: Hi Matt, I am one step closer now. When I run the ex1 code with '-interpolate', the partition is good; without it, it's weird. Crap! That did not even occur to me. Yes, the dual graph construction will not work for uninterpolated wedges. So, do you really need an uninterpolated mesh? If so, I can put it on the buglist. For Prism mesh, I am afraid so. For 2D triangle mesh and 3D tetra mesh, the partition is pretty good without interpolate. That's why I didn't have problems in my previous simulations using the other cell types. What I mean is, are you avoiding interpolating the mesh for memory? The amount of memory is usually small compared to fields on the mesh. No, not because of memory consumption problem. When the code was first written several years ago, I just put interpolate = false there. 
Now after setting interpolate = true, I need to update the code in setting cell-node index (array cell). The following code does not work anymore when interpolate = true. There is some code that is not well written and it needs to be improved. !c add local to global cell id mapping do ipoint = 0, istart-1 icell = ipoint + 1 call DMPlexGetCone(dmda_flow%da,ipoint,cone,ierr) CHKERRQ(ierr) do ivtex = 1, num_nodes_per_cell cell_node_idx(:,ipoint+1) = cone - istart + 1 end do call DMPlexRestoreCone(dmda_flow%da,ipoint,cone,ierr) CHKERRQ(ierr) end do My F90 is a bit shaky, but I think you want PetscInt, pointer :: nClosure(:) do ipoint = 0, istart-1 icell = ipoint + 1 call DMPlexGetClosure(dmda_flow%da,ipoint,cone,ierr);CHKERRQ(ierr) call DMPlexGetTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) ivtex = 0 do icl = 0,len(nClosure)-1,2 if ((nClosure(icl) >= vStart) .and. (nClosure(icl) < vEnd)) then cell_node_idx(ivtex,ipoint+1) = nClosure(icl) - istart + 1 ivtex = ivtex + 1 end if end do call DMPlexRestoreTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) end do Basically, you use the closure, and filter out everything that is not a vertex. Thanks, Matt. Would you mind give some tips on the following code as well. The last line of following section (num_cells_loc = num_cells+num_nodes-nleaves-num_nodes_loc) does not work either when interpolate == true. Sure. The reason that it does not work is that it assumes the only shared things are vertices and cells. Now we also share edges and faces. I think its easier to just compute it directly: num_cells_loc = num_cells do l = 0,nleaves-1 if ((gmine(l) >= cStart) .and. (gmine(l) < cEnd)) then num_cells_loc = num_cells_loc - 1 end if end do Thanks, Matt !c get local mesh DM call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr) CHKERRQ(ierr) call DMGetCoordinateDM(dmda_flow%da,cda,ierr) CHKERRQ(ierr) call DMGetSection(cda,cs,ierr) CHKERRQ(ierr) call PetscSectionGetChart(cs,istart,iend,ierr) CHKERRQ(ierr) !c Calculate number of nodes/cells with ghost nodes/cells for each processor num_nodes = iend-istart num_cells = istart !c Calculate local number of nodes without ghost nodes num_nodes_loc = 0 do ipoint = istart, iend-1 call DMPlexGetPointGlobal(cda,ipoint,pstart,pend,ierr) CHKERRQ(ierr) if (pend >= 0) then num_nodes_loc = num_nodes_loc + 1 end if end do !c Calculate number of cells without ghost cells for each processor call DMGetPointSF(dmda_flow%da,sf,ierr) CHKERRQ(ierr) call PetscSFGetGraph(sf,nroots,nleaves,gmine,gremote,ierr) CHKERRQ(ierr) !!!!!This calculation is correct when interpolate == false!!!!! !!!!! but incorrect when interpolate == true !!!!! num_cells_loc = num_cells+num_nodes-nleaves-num_nodes_loc Thanks, Danyang Thanks, Matt Thanks, Danyang Thanks, Matt Thanks, Danyang Thanks, Matt Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 2:12 PM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem Hi Matt, Attached is another prism mesh using 8 processors. The partition of the lower mesh does not looks good. Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 1:50 PM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem Hi Matt, Here is what I get using ex1c with stencil 0. There is no change in the source code. I just compile and run the code in different ways. By using ?make -f ./gmakefile ?.?, it works as expected. However, by using ?make ex1? 
and then run the code using ?mpiexec -n ??, the partition does not looks good. My code has the same problem as this one if I use prism mesh. I just wonder what makes this difference, even without overlap. Thanks, Danyang From: Matthew Knepley Date: Wednesday, April 8, 2020 at 1:32 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 12:50 PM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: Hi Matt, Here is something pretty interesting. I modified ex1.c file with output of number of nodes and cells (as shown below) . And I also changed the stencil size to 1. /* get coordinates and section */ ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); num_nodes = iend-istart; num_cells = istart; /* Output rank and processor information */ printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, num_nodes, num_cells); If I compile the code using ?make ex1? and then run the test using ?mpiexec -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the modified ex1f90 code I sent. ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 2, num_nodes 699, num_cess 824 rank 0: of nprcs: 2, num_nodes 699, num_cess 824 Ah, I was not looking closely. You are asking for a cell overlap of 1 in the partition. That is why these numbers sum to more than the total in the mesh. Do you want a cell overlap of 1? Yes, I need cell overlap of 1 in some circumstance. The mesh has two layers of cells with 412 cells per layer and three layers of nodes with 233 nodes per layer. The number of cells looks good to me. I am confused why the same code generates pretty different partition. If I set the stencil to 0, I get following results. The first method looks good and the second one is not a good choice, with much more number of ghost nodes. ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 # > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 # > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo rank 0: of nprcs: 2, num_nodes 466, num_cess 412 rank 1: of nprcs: 2, num_nodes 466, num_cess 412 I think this might just be a confusion over interpretation. Here is how partitioning works: 1) We partition the mesh cells using ParMetis, Chaco, etc. 2) We move those cells (and closures) to the correct processes 3) If you ask for overlap, we mark a layer of adjacent cells on remote processes and move them to each process The original partitions are the same, Then we add extra cells, and their closures, to each partition. This is what you are asking for. You would get the same answer with GMsh if it gave you an overlap region. Thanks, Matt Thanks, Danyang Thanks, Matt ? tests mpiexec -n 4 ./ex1 -filename basin2layer.exo rank 1: of nprcs: 4, num_nodes 432, num_cess 486 rank 0: of nprcs: 4, num_nodes 405, num_cess 448 rank 2: of nprcs: 4, num_nodes 411, num_cess 464 rank 3: of nprcs: 4, num_nodes 420, num_cess 466 However, if I compile and run the code using the script you shared, I get reasonable results. ? 
petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 ? petsc-3.13.0 make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 Is there some difference in compiling or runtime options that cause the difference? Would you please check if you can reproduce the same problem using the modified ex1.c? Thanks, Danyang From: Danyang Su Date: Wednesday, April 8, 2020 at 9:37 AM To: Matthew Knepley Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem From: Matthew Knepley Date: Wednesday, April 8, 2020 at 9:20 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: From: Matthew Knepley Date: Wednesday, April 8, 2020 at 6:45 AM To: Danyang Su Cc: PETSc Subject: Re: [petsc-users] DMPlex partition problem On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: Dear All, Hope you are safe and healthy. I have a question regarding pretty different partition results of prism mesh. The partition in PETSc generates much more ghost nodes/cells than the partition in Gmsh, even though both use metis as partitioner. Attached please find the prism mesh in both vtk and exo format, the test code modified based on ex1f90 example. Similar problem are observed for larger dataset with more layers. I will figure this out by next week. I have run your mesh and do not get those weird partitions. I am running in master. What are you using? Also, here is an easy way to do this using a PETSc test: cd $PETSC_DIR make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 and then load mesh.xmf into Paraview. Here is what I see (attached). Is it possible for you to try the master branch? Hi Matt, Thanks for your quick response. If I use your script, the partition looks good, as shown in the attached figure. I am working on PETSc 3.13.0 release version on Mac OS. Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c I looked at your code and cannot see any difference. Also, no changes are in master that are not in 3.13. This is very strange. I guess we will have to go one step at a time between the example and your code. I will add mesh output to the ex1f90 example and check if the cell/vertex rank is exactly the same. I wrote the mesh output myself based on the partition but there should be no problem in that part. The number of ghost nodes and cells is pretty easy to check. Not sure if there is any difference between the C code and Fortran code that causes the problem. Anyway, I will keep you updated. Thanks, Matt Thanks, Matt Thanks, Matt For example, in Gmsh, I get partition results using two processors and four processors as shown below, which are pretty reasonable. However, in PETSc, the partition looks a bit weird. 
Looks like it takes layer partition first and then inside layer. If the number of nodes per layer is very large, this kind of partitioning results into much more ghost nodes/cells. Anybody know how to improve the partitioning in PETSc? I have tried parmetis and chaco. There is no big difference between them. Thanks, Danyang -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 404240 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281271 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 457838 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542688 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.png Type: image/png Size: 349319 bytes Desc: not available URL: From knepley at gmail.com Thu Apr 9 06:18:48 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Apr 2020 07:18:48 -0400 Subject: [petsc-users] DMPlex partition problem In-Reply-To: <5BB716B9-71F3-4F87-9ED3-FF0BF5562727@gmail.com> References: <4E29C9D2-0AEE-4F66-BB0B-5D666C67F7FF@gmail.com> <186C4C0D-37BF-4045-88E5-D8FE0C9AE127@gmail.com> <32B808D2-E236-468B-B235-9906BC5F5736@gmail.com> <80B9C23B-ED44-421D-96F6-3EB27BA3805B@gmail.com> <34F1B090-F3B7-4721-BFF2-7F8EE02B6CD6@gmail.com> <5BB716B9-71F3-4F87-9ED3-FF0BF5562727@gmail.com> Message-ID: Thanks for all the hard work tracking down this partitioning bug. I will put in an error code if someone tries that. I am really glad its working again. Thanks, Matt On Thu, Apr 9, 2020 at 2:29 AM Danyang Su wrote: > Hi Matt, > > > > After some modification of your code (index from zero to one and gmine > condition change), it works now. > > To calculate number of local owned cells, the following code is used. > > num_cells_loc = num_cells > > do ipoint = 1, nleaves > > if (gmine(ipoint) < istart) then > > num_cells_loc = num_cells_loc - 1 > > end if > > end do > > > > To calculate cell node index, the following code is used > > do ipoint = 0, istart-1 > > > > icell = ipoint + 1 > > call DMPlexGetTransitiveClosure(dmda_flow% > da,ipoint,PETSC_TRUE,nClosure,ierr) > > CHKERRQ(ierr) > > > > ivtex = 0 > > do icl = 1,size(nClosure),2 > > if (nClosure(icl) >= istart .and. nClosure(icl) < iend) then > > ivtex = ivtex + 1 > > cell_node_idx(ivtex,ipoint+1) = nClosure(icl) - istart + 1 > > end if > > end do > > call DMPlexRestoreTransitiveClosure(dmda_flow% > da,ipoint,PETSC_TRUE,nClosure,ierr) > > CHKERRQ(ierr) > > end do > > > > Thanks for your invaluable help. I really appreciate that. > > > > All the best, > > > > Danyang > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 5:55 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 8:46 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 5:32 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 8:18 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 4:50 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:47 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 4:41 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 5:52 PM Danyang Su wrote: > > Hi Matt, > > > > I am one step closer now. When run the ex1 code with ?-interpolate?, the > partition is good, without it, it?s weird. > > Crap! That did not even occur to me. Yes, the dual graph construction > will not work for uninterpolated wedges. > > So, do you really need an uninterpolated mesh? If so, I can put it on the > buglist. > > > > For Prism mesh, I am afraid so. For 2D triangle mesh and 3D tetra mesh, > the partition is pretty good without interpolate. That?s why I didn?t have > problem for all my previous simulations using the other cell types. > > > > What I mean is, are you avoiding interpolating the mesh for memory? The > amount of memory is usually small compared to > > fields on the mesh. 
> > > > No, not because of memory consumption problem. When the code was first > written several years ago, I just put interpolate = false there. Now after > setting interpolate = true, I need to update the code in setting cell-node > index (array cell). The following code does not work anymore when > interpolate = true. There is some code that is not well written and it > needs to be improved. > > *!c add local to global cell id mapping* > > do ipoint = 0, istart-1 > > icell = ipoint + 1 > > call DMPlexGetCone(dmda_flow%da,ipoint,cone,ierr) > > CHKERRQ(ierr) > > do ivtex = 1, num_nodes_per_cell > > cell_node_idx(:,ipoint+1) = cone - istart + 1 > > end do > > call DMPlexRestoreCone(dmda_flow%da,ipoint,cone,ierr) > > CHKERRQ(ierr) > > end do > > > > > > My F90 is a bit shaky, but I think you want > > > > PetscInt, pointer :: nClosure(:) > > do ipoint = 0, istart-1 > icell = ipoint + 1 > call DMPlexGetClosure(dmda_flow%da,ipoint,cone,ierr);CHKERRQ(ierr) > call > DMPlexGetTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) > ivtex = 0 > do icl = 0,len(nClosure)-1,2 > if ((nClosure(icl) >= vStart) .and. (nClosure(icl) < vEnd)) then > cell_node_idx(ivtex,ipoint+1) = nClosure(icl) - istart + 1 > ivtex = ivtex + 1 > end if > end do > call > DMPlexRestoreTransitiveClosure(dmda_flow%da,ipoint,PETSC_TRUE,nClosure,ierr);CHKERRQ(ierr) > end do > > > > Basically, you use the closure, and filter out everything that is not a > vertex. > > > > Thanks, Matt. Would you mind give some tips on the following code as well. > The last line of following section (num_cells_loc = num_cells+num_nodes- > nleaves-num_nodes_loc) does not work either when *interpolate == true.* > > > > Sure. The reason that it does not work is that it assumes the only shared > things are vertices and cells. Now we also share edges and faces. I think > > its easier to just compute it directly: > > > > num_cells_loc = num_cells > > do l = 0,nleaves-1 > > if ((gmine(l) >= cStart) .and. (gmine(l) < cEnd)) then > > num_cells_loc = num_cells_loc - 1 > > end if > > end do > > > > Thanks, > > > > Matt > > > > > > *!c get local mesh DM* > > call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr) > > CHKERRQ(ierr) > > > > call DMGetCoordinateDM(dmda_flow%da,cda,ierr) > > CHKERRQ(ierr) > > > > call DMGetSection(cda,cs,ierr) > > CHKERRQ(ierr) > > > > call PetscSectionGetChart(cs,istart,iend,ierr) > > CHKERRQ(ierr) > > > > *!c Calculate number of nodes/cells with ghost nodes/cells for each > processor* > > num_nodes = iend-istart > > num_cells = istart > > > > *!c Calculate local number of nodes without ghost nodes* > > num_nodes_loc = 0 > > do ipoint = istart, iend-1 > > call DMPlexGetPointGlobal(cda,ipoint,pstart,pend,ierr) > > CHKERRQ(ierr) > > if (pend >= 0) then > > num_nodes_loc = num_nodes_loc + 1 > > end if > > end do > > > > *!c Calculate number of cells without ghost cells for each > processor* > > call DMGetPointSF(dmda_flow%da,sf,ierr) > > CHKERRQ(ierr) > > > > call PetscSFGetGraph(sf,nroots,nleaves,gmine,gremote,ierr) > > CHKERRQ(ierr) > > > > *!!!!!This calculation is correct when interpolate == false!!!!!* > > *!!!!! 
but incorrect when interpolate == true !!!!!* > > num_cells_loc = num_cells+num_nodes-nleaves-num_nodes_loc > > > > Thanks, > > > > Danyang > > > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Danyang > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Danyang > > > > Thanks, > > > > Matt > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 2:12 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Attached is another prism mesh using 8 processors. The partition of the > lower mesh does not looks good. > > > > > > Thanks, > > > > Danyang > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 1:50 PM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > Hi Matt, > > > > Here is what I get using ex1c with stencil 0. There is no change in the > source code. I just compile and run the code in different ways. By using > ?make -f ./gmakefile ?.?, it works as expected. However, by using ?make > ex1? and then run the code using ?mpiexec -n ??, the partition does not > looks good. My code has the same problem as this one if I use prism mesh. > > > > I just wonder what makes this difference, even without overlap. > > > > > > Thanks, > > > > Danyang > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 1:32 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 4:26 PM Danyang Su wrote: > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 12:50 PM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 3:22 PM Danyang Su wrote: > > Hi Matt, > > > > Here is something pretty interesting. I modified ex1.c file with output of > number of nodes and cells (as shown below) . And I also changed the > stencil size to 1. > > > > /* get coordinates and section */ > > ierr = DMGetCoordinatesLocal(*dm,&gc);CHKERRQ(ierr); > > ierr = DMGetCoordinateDM(*dm,&cda);CHKERRQ(ierr); > > ierr = DMGetSection(cda,&cs);CHKERRQ(ierr); > > ierr = PetscSectionGetChart(cs,&istart,&iend);CHKERRQ(ierr); > > > > num_nodes = iend-istart; > > num_cells = istart; > > > > /* Output rank and processor information */ > > printf("rank %d: of nprcs: %d, num_nodes %d, num_cess %d\n", rank, size, > num_nodes, num_cells); > > > > > > If I compile the code using ?make ex1? and then run the test using ?mpiexec > -n 2 ./ex1 -filename basin2layer.exo?, I get the same problem as the > modified ex1f90 code I sent. > > *?* *tests* mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 2, num_nodes 699, num_cess 824 > > rank 0: of nprcs: 2, num_nodes 699, num_cess 824 > > Ah, I was not looking closely. You are asking for a cell overlap of 1 in > the partition. That is why these numbers sum to more than > > the total in the mesh. Do you want a cell overlap of 1? > > > > Yes, I need cell overlap of 1 in some circumstance. The mesh has two > layers of cells with 412 cells per layer and three layers of nodes with > 233 nodes per layer. The number of cells looks good to me. I am confused > why the same code generates pretty different partition. If I set the > stencil to 0, I get following results. The first method looks good and the > second one is not a good choice, with much more number of ghost nodes. 
> > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 1: of nprcs: 2, num_nodes 354, num_cess 392 > > # > rank 0: of nprcs: 2, num_nodes 384, num_cess 432 > > > > ? tests mpiexec -n 2 ./ex1 -filename basin2layer.exo > > rank 0: of nprcs: 2, num_nodes 466, num_cess 412 > > rank 1: of nprcs: 2, num_nodes 466, num_cess 412 > > > > I think this might just be a confusion over interpretation. Here is how > partitioning works: > > > > 1) We partition the mesh cells using ParMetis, Chaco, etc. > > > > 2) We move those cells (and closures) to the correct processes > > > > 3) If you ask for overlap, we mark a layer of adjacent cells on remote > processes and move them to each process > > > > The original partitions are the same, Then we add extra cells, and their > closures, to each partition. This is what you are asking for. > > You would get the same answer with GMsh if it gave you an overlap region. > > > > Thanks, > > > > Matt > > > > Thanks, > > Danyang > > > > > > Thanks, > > > > Matt > > *?* *tests* mpiexec -n 4 ./ex1 -filename basin2layer.exo > > rank 1: of nprcs: 4, num_nodes 432, num_cess 486 > > rank 0: of nprcs: 4, num_nodes 405, num_cess 448 > > rank 2: of nprcs: 4, num_nodes 411, num_cess 464 > > rank 3: of nprcs: 4, num_nodes 420, num_cess 466 > > > > However, if I compile and run the code using the script you shared, I get > reasonable results. > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=2 > > # > rank 0: of nprcs: 2, num_nodes 429, num_cess 484 > > # > rank 1: of nprcs: 2, num_nodes 402, num_cess 446 > > > > *?* *petsc-3.13.0* make -f ./gmakefile test > globsearch="dm_impls_plex_tests-ex1_cylinder" EXTRA_OPTIONS="-filename > ./basin2layer.exo -dm_view hdf5:$PWD/mesh.h5 -dm_partition_view" NP=4 > > # > rank 1: of nprcs: 4, num_nodes 246, num_cess 260 > > # > rank 2: of nprcs: 4, num_nodes 264, num_cess 274 > > # > rank 3: of nprcs: 4, num_nodes 264, num_cess 280 > > # > rank 0: of nprcs: 4, num_nodes 273, num_cess 284 > > > > Is there some difference in compiling or runtime options that cause the > difference? Would you please check if you can reproduce the same problem > using the modified ex1.c? > > > > Thanks, > > > > Danyang > > > > *From: *Danyang Su > *Date: *Wednesday, April 8, 2020 at 9:37 AM > *To: *Matthew Knepley > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > > > > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 9:20 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 12:13 PM Danyang Su wrote: > > *From: *Matthew Knepley > *Date: *Wednesday, April 8, 2020 at 6:45 AM > *To: *Danyang Su > *Cc: *PETSc > *Subject: *Re: [petsc-users] DMPlex partition problem > > > > On Wed, Apr 8, 2020 at 7:25 AM Matthew Knepley wrote: > > On Wed, Apr 8, 2020 at 12:48 AM Danyang Su wrote: > > Dear All, > > > > Hope you are safe and healthy. > > > > I have a question regarding pretty different partition results of prism > mesh. The partition in PETSc generates much more ghost nodes/cells than the > partition in Gmsh, even though both use metis as partitioner. Attached > please find the prism mesh in both vtk and exo format, the test code > modified based on ex1f90 example. 
Similar problem are observed for larger > dataset with more layers. > > > > I will figure this out by next week. > > > > I have run your mesh and do not get those weird partitions. I am running > in master. What are you using? Also, here is an easy way > > to do this using a PETSc test: > > > > cd $PETSC_DIR > > make -f ./gmakefile test globsearch="dm_impls_plex_tests-ex1_cylinder" > EXTRA_OPTIONS="-filename ${HOME}/Downloads/basin2layer.exo -dm_view > hdf5:$PWD/mesh.h5 -dm_partition_view" NP=5 > > ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 > > > > and then load mesh.xmf into Paraview. Here is what I see (attached). Is it > possible for you to try the master branch? > > > > Hi Matt, > > > > Thanks for your quick response. If I use your script, the partition looks > good, as shown in the attached figure. I am working on PETSc 3.13.0 release > version on Mac OS. > > > > Does the above script use code /petsc/src/dm/label/tutorials/ex1c.c? > > > > > > It uses $PETSC_DIR/src/dm/impls/plex/tests/ex1.c > > > > I looked at your code and cannot see any difference. Also, no changes are > in master that are not in 3.13. This is very strange. > > I guess we will have to go one step at a time between the example and your > code. > > > > I will add mesh output to the ex1f90 example and check if the cell/vertex > rank is exactly the same. I wrote the mesh output myself based on the > partition but there should be no problem in that part. The number of ghost > nodes and cells is pretty easy to check. Not sure if there is any > difference between the C code and Fortran code that causes the problem. > Anyway, I will keep you updated. > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > Thanks, > > > > Matt > > > > For example, in Gmsh, I get partition results using two processors and > four processors as shown below, which are pretty reasonable. > > > > > > However, in PETSc, the partition looks a bit weird. Looks like it takes > layer partition first and then inside layer. If the number of nodes per > layer is very large, this kind of partitioning results into much more ghost > nodes/cells. > > > > Anybody know how to improve the partitioning in PETSc? I have tried > parmetis and chaco. There is no big difference between them. > > > > > > > > Thanks, > > > > Danyang > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 404240 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 281271 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 457838 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 542688 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 349319 bytes Desc: not available URL: From heepark at sandia.gov Thu Apr 9 14:39:19 2020 From: heepark at sandia.gov (Park, Heeho) Date: Thu, 9 Apr 2020 19:39:19 +0000 Subject: [petsc-users] J^T J p calculation In-Reply-To: <1586457072808.81181@sandia.gov> References: <1586457072808.81181@sandia.gov> Message-ID: <1586461159306.47379@sandia.gov> Hi PETSc developers, I am trying to formulate p^T J^T J p where p is a solution vector length n, J is Jacobian n-by-n matrix. Ref: https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm under Large-scale optimization It is known that for parallel computations, the best way to perform this computation (with csr matrix) is J^T J p = SUM_i [ c_i ( c_i dot p) ] which results in a vector. In PFLOTRAN, we use mpibaij matrix. I could not find a PETSc command to perform this with one command. Is there one I couldn't find? if not, is using MATGETROW with C for-loop a good way to do this? ? I currently use, MatMult(J,p,w); MatMultTranspose(J,w,w2); VecDotRealPart(p,w2); Best, Heeho Daniel Park ! ------------------------------------ ! Sandia National Laboratories Org: 08844, R&D Work: 505-844-1319 ! ------------------------------------ ! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Apr 9 15:44:40 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Apr 2020 16:44:40 -0400 Subject: [petsc-users] J^T J p calculation In-Reply-To: <1586461159306.47379@sandia.gov> References: <1586457072808.81181@sandia.gov> <1586461159306.47379@sandia.gov> Message-ID: On Thu, Apr 9, 2020 at 3:41 PM Park, Heeho via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi PETSc developers, > > > I am trying to formulate > > > p^T J^T J p > > > where p is a solution vector length n, J is Jacobian n-by-n matrix. > > > Ref: https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm under > Large-scale optimization > > > It is known that for parallel computations, the best way to perform this > computation (with csr matrix) is > > > J^T J p = SUM_i [ c_i ( c_i dot p) ] which results in a vector. > I don't think so. Why not just do y = J p y^t y = p^T J^T J p Thanks, Matt > In PFLOTRAN, we use mpibaij matrix. I could not find a PETSc command to > perform this with one command. > > Is there one I couldn't find? if not, is using MATGETROW with C for-loop a > good way to do this? > > > I currently use, MatMult(J,p,w); MatMultTranspose(J,w,w2); > VecDotRealPart(p,w2); > > > Best, > > > Heeho Daniel Park > > ! ------------------------------------ ! > Sandia National Laboratories > Org: 08844, R&D > Work: 505-844-1319 > ! ------------------------------------ ! > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tisaac at cc.gatech.edu Thu Apr 9 15:45:58 2020 From: tisaac at cc.gatech.edu (Isaac, Tobin G) Date: Thu, 9 Apr 2020 20:45:58 +0000 Subject: [petsc-users] J^T J p calculation In-Reply-To: <1586461159306.47379@sandia.gov> References: <1586457072808.81181@sandia.gov>,<1586461159306.47379@sandia.gov> Message-ID: I don't understand, why don't you just take the dot product of w with itself? Toby Isaac, Assistant Professor, GTCSE ________________________________ From: petsc-users on behalf of Park, Heeho via petsc-users Sent: Thursday, April 9, 2020 3:39:19 PM To: petsc-users at mcs.anl.gov Subject: [petsc-users] J^T J p calculation Hi PETSc developers, I am trying to formulate p^T J^T J p where p is a solution vector length n, J is Jacobian n-by-n matrix. Ref: https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm under Large-scale optimization It is known that for parallel computations, the best way to perform this computation (with csr matrix) is J^T J p = SUM_i [ c_i ( c_i dot p) ] which results in a vector. In PFLOTRAN, we use mpibaij matrix. I could not find a PETSc command to perform this with one command. Is there one I couldn't find? if not, is using MATGETROW with C for-loop a good way to do this? ? I currently use, MatMult(J,p,w); MatMultTranspose(J,w,w2); VecDotRealPart(p,w2); Best, Heeho Daniel Park ! ------------------------------------ ! Sandia National Laboratories Org: 08844, R&D Work: 505-844-1319 ! ------------------------------------ ! -------------- next part -------------- An HTML attachment was scrubbed... 
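A minimal sketch of the approach suggested in the two replies above: form w = J p once with MatMult and take the 2-norm, since p^T J^T J p = (J p)^T (J p) = ||J p||^2, so no MatMultTranspose is needed. The helper name PTJTJP is made up for illustration and is not a PFLOTRAN or PETSc routine.

#include <petscmat.h>

/* Sketch only: compute s = p^T J^T J p as ||J p||^2. */
static PetscErrorCode PTJTJP(Mat J, Vec p, PetscReal *s)
{
  Vec            w;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateVecs(J, NULL, &w);CHKERRQ(ierr); /* w sized for the range of J */
  ierr = MatMult(J, p, w);CHKERRQ(ierr);           /* w = J p        */
  ierr = VecNorm(w, NORM_2, s);CHKERRQ(ierr);      /* *s = ||J p||_2 */
  *s   = (*s)*(*s);                                /* *s = p^T J^T J p */
  ierr = VecDestroy(&w);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Compared with the MatMult/MatMultTranspose/VecDotRealPart sequence quoted above, this performs one matrix-vector product and one norm instead of two products and a dot product.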
URL: From heepark at sandia.gov Thu Apr 9 16:01:30 2020 From: heepark at sandia.gov (Park, Heeho) Date: Thu, 9 Apr 2020 21:01:30 +0000 Subject: [petsc-users] [EXTERNAL] Re: J^T J p calculation In-Reply-To: References: <1586457072808.81181@sandia.gov> <1586461159306.47379@sandia.gov>, Message-ID: <4CE611A8-3C74-4175-8C19-879F5E3AD1DC@sandia.gov> Right. That sounds good Heeho Park On Apr 9, 2020, at 2:44 PM, Matthew Knepley wrote: ? On Thu, Apr 9, 2020 at 3:41 PM Park, Heeho via petsc-users > wrote: Hi PETSc developers, I am trying to formulate p^T J^T J p where p is a solution vector length n, J is Jacobian n-by-n matrix. Ref: https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm under Large-scale optimization It is known that for parallel computations, the best way to perform this computation (with csr matrix) is J^T J p = SUM_i [ c_i ( c_i dot p) ] which results in a vector. I don't think so. Why not just do y = J p y^t y = p^T J^T J p Thanks, Matt In PFLOTRAN, we use mpibaij matrix. I could not find a PETSc command to perform this with one command. Is there one I couldn't find? if not, is using MATGETROW with C for-loop a good way to do this? I currently use, MatMult(J,p,w); MatMultTranspose(J,w,w2); VecDotRealPart(p,w2); Best, Heeho Daniel Park ! ------------------------------------ ! Sandia National Laboratories Org: 08844, R&D Work: 505-844-1319 ! ------------------------------------ ! -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Apr 9 16:20:24 2020 From: jed at jedbrown.org (Jed Brown) Date: Thu, 09 Apr 2020 15:20:24 -0600 Subject: [petsc-users] J^T J p calculation In-Reply-To: <1586461159306.47379@sandia.gov> References: <1586457072808.81181@sandia.gov> <1586461159306.47379@sandia.gov> Message-ID: <87eesw5pbb.fsf@jedbrown.org> "Park, Heeho via petsc-users" writes: > Hi PETSc developers, > > > I am trying to formulate > > > p^T J^T J p > > > where p is a solution vector length n, J is Jacobian n-by-n matrix. > > > Ref: https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm under Large-scale optimization > > > It is known that for parallel computations, the best way to perform this computation (with csr matrix) is > > > J^T J p = SUM_i [ c_i ( c_i dot p) ] which results in a vector. Sounds like you just want to apply J^T J to a vector, not compute it as a matrix (e.g., for preconditioning). You can solve such systems with KSPLSQR, but the preconditioner is normally constructed using J^T J (or an approximation thereof). If you wast want to apply the operator to a matrix, use J^T (J p), which is much more efficient ("it is known" notwithstanding). From rlmackie862 at gmail.com Fri Apr 10 13:00:32 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 10 Apr 2020 11:00:32 -0700 Subject: [petsc-users] Question on VecScatter options Message-ID: <9ACB6298-49BB-47D9-AE68-05AFB03E748A@gmail.com> The VecScatter man page says that the default vecscatter type uses PetscSF, and that one can use PetscSF options to control the communication. PetscSFCreate lists 3 different types, including MPI-3 options. 
So I?m wondering is it enough to just add, for example, -sf_type neighbor to the list of PETSc options to have all VecScatter calls use MPI 3, and are there advantages to using that over the default options? Thanks, Randy M. From junchao.zhang at gmail.com Fri Apr 10 13:33:47 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Fri, 10 Apr 2020 13:33:47 -0500 Subject: [petsc-users] Question on VecScatter options In-Reply-To: <9ACB6298-49BB-47D9-AE68-05AFB03E748A@gmail.com> References: <9ACB6298-49BB-47D9-AE68-05AFB03E748A@gmail.com> Message-ID: sf_type window is not recommended, since it is very unlikely to bring any benefit. sf_type neighbor uses MPI-3.0 MPI_Ineighbor_alltoallv() for communication. If the the MPI implementation you use has optimized the so-called neighborhood communication, then using this option may improve performance, otherwise, it has no benefit than sf_type basic. sf_type basic uses MPI_Isend/Irecv. VecScatter is usually done through that, except some well-structured ones, such as VecScatterCreateToZero/All, which ultimately use MPI_I(all)gatherv (but are still through PetscSF). --Junchao Zhang On Fri, Apr 10, 2020 at 1:01 PM Randall Mackie wrote: > The VecScatter man page says that the default vecscatter type uses > PetscSF, and that one can use PetscSF options to control the communication. > > > PetscSFCreate lists 3 different types, including MPI-3 options. > > > So I?m wondering is it enough to just add, for example, -sf_type neighbor > to the list of PETSc options to have all VecScatter calls use MPI 3, and > are there advantages to using that over the default options? > > > Thanks, > > Randy M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat Apr 11 17:50:09 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 11 Apr 2020 18:50:09 -0400 Subject: [petsc-users] PetscObjectCompose error Message-ID: I am trying to compose a ISColoring to a Mat. THe code works, I know JacP and iscoloring are valid Mat and ISColoring. I have this: ierr = ((PetscObject)JacP,"coloring",(PetscObject)iscoloring);CHKERRQ(ierr); But it says my ISColoring is not valid. Any suggestions? [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Corrupt argument: https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: Invalid type of object: Parameter # 3 [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.13-99-gc486425 GIT Date: 2020-04-10 07:39:29 -0400 [0]PETSC ERROR: ./ex11 on a arch-summit-dbg-gnu-cuda named d25n09 by adams Sat Apr 11 18:45:44 2020 [0]PETSC ERROR: Configure options --with-fc=0 --COPTFLAGS="-g -O0 -fPIC" --CXXOPTFLAGS="-g -O0 -fPIC" --FOPTFLAGS="-g -O0 -fPIC" --CUDAOPTFLAGS="-O0 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis --with-make-np=16 --download-parmetis --download-ctetgen --download-amgx --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 --with-64-bit-indices=0 --with-debugging=1 PETSC_ARCH=arch-summit-dbg-gnu-cuda --with-openmp --force [0]PETSC ERROR: #1 PetscObjectCompose() line 727 in /autofs/nccs-svm1_home1/adams/petsc/src/sys/objects/inherit.c [0]PETSC ERROR: #2 FPLandauCUDAJacobian() line 697 in /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/landau.cu [0]PETSC ERROR: #3 FormLandau() line 531 in /autofs/nccs-svm1_home1/adams/petsc/src/dm/impls/plex/xgc_dmplex.c [0]PETSC ERROR: #4 FPLandIFunction() line 765 in /autofs/nccs-svm1_home1/adams/petsc/src/dm/impls/plex/xgc_dmplex.c [0]PETSC ERROR: #5 REIFunction() line 736 in ex11.c [0]PETSC ERROR: #6 TSComputeIFunction() line 894 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #7 SNESTSFormFunction_ARKIMEX() line 1034 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/impls/arkimex/arkimex.c [0]PETSC ERROR: #8 SNESTSFormFunction() line 4983 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #9 SNESComputeFunction() line 2383 in /autofs/nccs-svm1_home1/adams/petsc/src/snes/interface/snes.c [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 175 in /autofs/nccs-svm1_home1/adams/petsc/src/snes/impls/ls/ls.c [0]PETSC ERROR: #11 SNESSolve() line 4520 in /autofs/nccs-svm1_home1/adams/petsc/src/snes/interface/snes.c [0]PETSC ERROR: #12 TSStep_ARKIMEX() line 811 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/impls/arkimex/arkimex.c [0]PETSC ERROR: #13 TSStep() line 3721 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #14 TSSolve() line 4127 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Sat Apr 11 19:23:01 2020 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Sun, 12 Apr 2020 03:23:01 +0300 Subject: [petsc-users] PetscObjectCompose error In-Reply-To: References: Message-ID: On Sun, 12 Apr 2020 at 01:50, Mark Adams wrote: > I am trying to compose a ISColoring to a Mat. THe code works, I know JacP > and iscoloring are valid Mat and ISColoring. I have this: > > ierr = > ((PetscObject)JacP,"coloring",(PetscObject)iscoloring);CHKERRQ(ierr); > > But it says my ISColoring is not valid. Any suggestions? > > ISColoring is not a PetscObject, you cannot compose it. 
struct _n_ISColoring { PetscInt refct; PetscInt n; /* number of colors */ IS *is; /* for each color indicates columns */ MPI_Comm comm; ISColoringValue *colors; /* for each column indicates color */ PetscInt N; /* number of columns */ ISColoringType ctype; PetscBool allocated; }; I guess your best option for now is to store it in a PetscContainer object with a destroy callback, and then compose the container. -- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mukkundsunjii at gmail.com Mon Apr 13 05:54:57 2020 From: mukkundsunjii at gmail.com (MUKKUND SUNJII) Date: Mon, 13 Apr 2020 12:54:57 +0200 Subject: [petsc-users] Adaptivity Options in ex11.c Message-ID: <9EEC8058-E574-468A-A6A6-9D7F08F86DFB@gmail.com> Greetings, I have been trying to add on to the functionality of the adaptive grid solver featured in the ts/tutorials/ex11.c. I noticed that there is not an option to set the initial, minimum or maximum refinement levels in the adaptToleranceFVM() function where the grid adaption operation takes place. When I use the function DMForestSetMaximumRefinement(?), I am returned with the error saying that the 'DM has already been set up'. As a result, I looked into the DMForest tests and tutorials. I found that in dm/impls/forest/tests/ex2.c, various DMForests functions are being used. Nevertheless, the flow of operations of the grid adaption process is a bit different. In the test file, three DM objects are used : Base DM - PreForest DM - PostForest DM. Parameters such as the initial, and minimum refinement levels are assigned to the PreForest DM before it is set up. But in adaptToleranceFVM() in ex11.c, only 2 DM objects are used and hence these parameters cannot be assigned (i.e., the objects used are plex and adaptedDM). I tried to replicate the Base DM - PreForest DM - PostForest DM structure in adaptToleranceFVM() . However, I am returned with a multitude of errors. I do not include the error statements as I suspect that there is a fundamental problem with my adaption routine and a specific error won?t indicate the bigger problem. In order to not be verbose in the email, I have included the GitHub link of the code that I have been working on. You can find the adaptToleranceFVM() in lines 1542 to 1739. I?d very much appreciate any feedback or suggestions on this segment. Github Link : https://github.com/mukkund1996/petsc/blob/cbcf8824f12d1159c0f5dc84d028a4869a2a0594/amr/ex11_adapt_param.c#L1542 Thank you in advanced. Regards, Mukkund Sunjii -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 13 09:45:09 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Apr 2020 10:45:09 -0400 Subject: [petsc-users] Adaptivity Options in ex11.c In-Reply-To: <9EEC8058-E574-468A-A6A6-9D7F08F86DFB@gmail.com> References: <9EEC8058-E574-468A-A6A6-9D7F08F86DFB@gmail.com> Message-ID: On Mon, Apr 13, 2020 at 9:47 AM MUKKUND SUNJII wrote: > Greetings, > > I have been trying to add on to the functionality of the adaptive grid > solver featured in the ts/tutorials/ex11.c. > > I noticed that there is not an option to set the initial, minimum or > maximum refinement levels in the adaptToleranceFVM() function where the > grid adaption operation takes place. 
When I use the function > DMForestSetMaximumRefinement(?), I am returned with the error saying that > the 'DM has already been set up'. > 1) The initial refinement comes from the initial mesh you make 2) Do you call DMForestSetMinimumRefinement() before DMSetUp()? Thanks, Matt > As a result, I looked into the DMForest tests and tutorials. I found that > in dm/impls/forest/tests/ex2.c, various DMForests functions are being used. > Nevertheless, the flow of operations of the grid adaption process is a bit > different. In the test file, three DM objects are used :* Base DM - > PreForest DM - PostForest DM*. Parameters such as the initial, and > minimum refinement levels are assigned to the PreForest DM before it is set > up. But in adaptToleranceFVM() in ex11.c, only 2 DM objects are used and > hence these parameters cannot be assigned (i.e., the objects used are > *plex* and *adaptedDM*). > > I tried to replicate the *Base DM - PreForest DM - PostForest DM *structure > in adaptToleranceFVM() . However, I am returned with a multitude of > errors. I do not include the error statements as I suspect that there is > a fundamental problem with my adaption routine and a specific error won?t > indicate the bigger problem. > > In order to not be verbose in the email, I have included the GitHub link > of the code that I have been working on. You can find the > adaptToleranceFVM() in lines 1542 to 1739. I?d very much appreciate any > feedback or suggestions on this segment. > > *Github Link : * > https://github.com/mukkund1996/petsc/blob/cbcf8824f12d1159c0f5dc84d028a4869a2a0594/amr/ex11_adapt_param.c#L1542 > > Thank you in advanced. > > Regards, > > Mukkund Sunjii > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbllm2018 at hotmail.com Mon Apr 13 09:59:30 2020 From: lbllm2018 at hotmail.com (Bin Liu) Date: Mon, 13 Apr 2020 14:59:30 +0000 Subject: [petsc-users] inserting multiple rows together into a matrix Message-ID: Hi all, I know how to insert values in one row into the matrix via routine "MatSetValues". I understand I logically should be able to insert multiple rows into the matrix with one call of "MatSetValues". However, I am not sure how to do it. I searched in the PETSc mail list and did not find a relevant question answered before. Could anyone help me and give me a simple example code? Regards B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Mon Apr 13 10:32:38 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Mon, 13 Apr 2020 10:32:38 -0500 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: References: Message-ID: Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 m=2; n=3; idxm[] = {2, 4}; idxn[] = {3, 7, 9}; v[6] = {0.1, 0.2, ....}; MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); --Junchao Zhang On Mon, Apr 13, 2020 at 9:59 AM Bin Liu wrote: > Hi all, > > > > I know how to insert values in one row into the matrix via routine > ?MatSetValues?. I understand I logically should be able to insert multiple > rows into the matrix with one call of ?MatSetValues?. However, I am not > sure how to do it. I searched in the PETSc mail list and did not find a > relevant question answered before. 
Could anyone help me and give me a > simple example code? > > > > Regards > > B. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Apr 13 10:33:16 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 13 Apr 2020 11:33:16 -0400 Subject: [petsc-users] error with xlib Message-ID: I get this error configuring zlib, osx, with OpenMP. Any ideas? Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1302312 bytes Desc: not available URL: From rlmackie862 at gmail.com Mon Apr 13 10:37:24 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Mon, 13 Apr 2020 08:37:24 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms Message-ID: Dear PETSc users, We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. The run generates the following messages, but doesn?t crash, just seems to hang with all processes continuing to show activity: [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend: /* Post the Isends with the message length-info */ for (i=0,j=0; i From knepley at gmail.com Mon Apr 13 10:46:25 2020 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Apr 2020 11:46:25 -0400 Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: On Mon, Apr 13, 2020 at 11:34 AM Mark Adams wrote: > I get this error configuring zlib, osx, with OpenMP. > Any ideas? > This failed without output Executing: cd /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" ./configure && /usr/bin/make -j7 -l12.0 && /usr/bin/make install So execute each step in turn and see what fails. Thanks, Matt > Thanks, > Mark > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
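Returning to the MatSetValues question earlier in this digest: a single call inserts a logically dense m-by-n block, with the values laid out row by row. A compilable sketch expanding Junchao's fragment, assuming A is an already created and preallocated matrix; the indices and values are illustrative only.

/* Insert rows 2 and 4, each with nonzeros in columns 3, 7 and 9. */
PetscInt       idxm[2] = {2, 4};                          /* global row indices           */
PetscInt       idxn[3] = {3, 7, 9};                       /* global columns, same per row */
PetscScalar    v[6]    = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6};  /* v[0..2] -> row 2, v[3..5] -> row 4 */
PetscErrorCode ierr;

ierr = MatSetValues(A, 2, idxm, 3, idxn, v, INSERT_VALUES);CHKERRQ(ierr);
ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);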
URL: From jacob.fai at gmail.com Mon Apr 13 10:46:49 2020 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Mon, 13 Apr 2020 10:46:49 -0500 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: References: Message-ID: <21BF415B-7D0A-4EBC-B2A8-3023A251EA02@gmail.com> Also if you know that the rows/cols are contiguous (next to each other) in your sparse matrix then it is recommended to use MatSetValuesBlocked as it is more efficient. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) Cell: (312) 694-3391 > On Apr 13, 2020, at 10:32 AM, Junchao Zhang wrote: > > Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 > m=2; > n=3; > idxm[] = {2, 4}; > idxn[] = {3, 7, 9}; > v[6] = {0.1, 0.2, ....}; > MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); > > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 9:59 AM Bin Liu > wrote: > Hi all, > > > > I know how to insert values in one row into the matrix via routine ?MatSetValues?. I understand I logically should be able to insert multiple rows into the matrix with one call of ?MatSetValues?. However, I am not sure how to do it. I searched in the PETSc mail list and did not find a relevant question answered before. Could anyone help me and give me a simple example code? > > > > Regards > > B. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Mon Apr 13 10:53:19 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Mon, 13 Apr 2020 10:53:19 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: Message-ID: Randy, Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. Thanks. --Junchao Zhang On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie wrote: > Dear PETSc users, > > We are trying to understand an issue that has come up in running our code > on a large cloud cluster with a large number of processes and subcomms. > This is code that we use daily on multiple clusters without problems, and > that runs valgrind clean for small test problems. > > The run generates the following messages, but doesn?t crash, just seems to > hang with all processes continuing to show activity: > > [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in > /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c > [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in > /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c > [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in > /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c > [492]PETSC ERROR: #4 VecScatterCreate() line 282 in > /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c > > > Looking at line 117 in PetscGatherMessageLengths we find the offending > statement is the MPI_Isend: > > > /* Post the Isends with the message length-info */ > for (i=0,j=0; i if (ilengths[i]) { > ierr = > MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); > j++; > } > } > > We have tried this with Intel MPI 2018, 2019, and mpich, all giving the > same problem. 
> > We suspect there is some limit being set on this cloud cluster on the > number of file connections or something, but we don?t know. > > Anyone have any ideas? We are sort of grasping for straws at this point. > > Thanks, Randy M. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Mon Apr 13 10:54:19 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Mon, 13 Apr 2020 08:54:19 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: Message-ID: <3C78CEE1-8170-4051-9E89-0BF3BF424BDF@gmail.com> Thanks we?ll try and report back. Randy M. > On Apr 13, 2020, at 8:53 AM, Junchao Zhang wrote: > > Randy, > Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr > But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. > Thanks. > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: > Dear PETSc users, > > We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. > This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. > > The run generates the following messages, but doesn?t crash, just seems to hang with all processes continuing to show activity: > > [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c > [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c > [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c > [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c > > > Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend: > > > /* Post the Isends with the message length-info */ > for (i=0,j=0; i if (ilengths[i]) { > ierr = MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); > j++; > } > } > > We have tried this with Intel MPI 2018, 2019, and mpich, all giving the same problem. > > We suspect there is some limit being set on this cloud cluster on the number of file connections or something, but we don?t know. > > Anyone have any ideas? We are sort of grasping for straws at this point. > > Thanks, Randy M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Mon Apr 13 10:54:31 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Mon, 13 Apr 2020 10:54:31 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: Message-ID: --Junchao Zhang On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang wrote: > Randy, > Someone reported similar problem before. It turned out an Intel MPI > MPI_Allreduce bug. A workaround is setting the environment variable > I_MPI_ADJUST_ALLREDUCE=1.arr > Correct: I_MPI_ADJUST_ALLREDUCE=1 > But you mentioned mpich also had the error. So maybe the problem is not > the same. So let's try the workaround first. 
If it doesn't work, add > another petsc option -build_twosided allreduce, which is a workaround for > Intel MPI_Ibarrier bugs we met. > Thanks. > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: > >> Dear PETSc users, >> >> We are trying to understand an issue that has come up in running our code >> on a large cloud cluster with a large number of processes and subcomms. >> This is code that we use daily on multiple clusters without problems, and >> that runs valgrind clean for small test problems. >> >> The run generates the following messages, but doesn?t crash, just seems >> to hang with all processes continuing to show activity: >> >> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in >> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in >> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in >> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in >> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >> >> >> Looking at line 117 in PetscGatherMessageLengths we find the offending >> statement is the MPI_Isend: >> >> >> /* Post the Isends with the message length-info */ >> for (i=0,j=0; i> if (ilengths[i]) { >> ierr = >> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >> j++; >> } >> } >> >> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the >> same problem. >> >> We suspect there is some limit being set on this cloud cluster on the >> number of file connections or something, but we don?t know. >> >> Anyone have any ideas? We are sort of grasping for straws at this point. >> >> Thanks, Randy M. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Apr 13 11:32:09 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 13 Apr 2020 12:32:09 -0400 Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: Now that I look at it, I see: CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" Note the two ". That does not look right. I use 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', I know how to do stuff like: '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack' Is there like and os.exec that I could use like this for my FLAGS? On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley wrote: > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams wrote: > >> I get this error configuring zlib, osx, with OpenMP. >> Any ideas? >> > > This failed without output > > Executing: cd > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" CFLAGS="-fstack-protector > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" ./configure && > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > So execute each step in turn and see what fails. 
> > Thanks, > > Matt > > >> Thanks, >> Mark >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 13 11:41:43 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 13 Apr 2020 11:41:43 -0500 (CDT) Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: This is very funky >>> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" CXXOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" FOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" --download-parmetis=1 --download-metis=1 --download-hypre=1 --download-triangle=1 --download-p4est=1 --download-zlib --with-x=0 --download-ctetgen --with-debugging=0 --download-hdf5=1 PETSC_ARCH=arch-macosx-gnu-O-omp --with-openmp=1 --with-log=0 --with-threadsafety --download-chaco <<< -I"$(brew --prefix libomp)/include" type options to configure doesn't make sense. You are using bash syntax here - and expecting configure to resolve it. Its best for your bash shell to evaluate this before passing this info to configure Also --download-zlib isn't needed on OSX Satish On Mon, 13 Apr 2020, Mark Adams wrote: > Now that I look at it, I see: > > CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 -g > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > --prefix libomp)/lib -lomp"" > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > Note the two ". That does not look right. I use > > 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', > > I know how to do stuff like: > > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + > '/lib64 -lblas -llapack' > > Is there like and os.exec that I could use like this for my FLAGS? > > > > On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley wrote: > > > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams wrote: > > > >> I get this error configuring zlib, osx, with OpenMP. > >> Any ideas? > >> > > > > This failed without output > > > > Executing: cd > > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" CFLAGS="-fstack-protector > > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" ./configure && > > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > > > So execute each step in turn and see what fails. > > > > Thanks, > > > > Matt > > > > > >> Thanks, > >> Mark > >> > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which their > > experiments lead. 
> > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > From balay at mcs.anl.gov Mon Apr 13 11:48:17 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 13 Apr 2020 11:48:17 -0500 (CDT) Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: You can do with either of the following notation [from your shell] ./configure "CXXOPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I$(brew --prefix libomp)/include -L$(brew --prefix libomp)/lib -lomp" or ./configure CXXOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I$(brew --prefix libomp)/include -L$(brew --prefix libomp)/lib -lomp" Satish On Mon, 13 Apr 2020, Satish Balay via petsc-users wrote: > This is very funky > > >>> > Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" CXXOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" FOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" --download-parmetis=1 --download-metis=1 --download-hypre=1 --download-triangle=1 --download-p4est=1 --download-zlib --with-x=0 --download-ctetgen --with-debugging=0 --download-hdf5=1 PETSC_ARCH=arch-macosx-gnu-O-omp --with-openmp=1 --with-log=0 --with-threadsafety --download-chaco > <<< > > -I"$(brew --prefix libomp)/include" type options to configure doesn't make sense. You are using bash syntax here - and expecting configure to resolve it. Its best for your bash shell to evaluate this before passing this info to configure > > Also --download-zlib isn't needed on OSX > > Satish > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > Now that I look at it, I see: > > > > CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 -g > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > --prefix libomp)/lib -lomp"" > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > Note the two ". That does not look right. I use > > > > 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', > > > > I know how to do stuff like: > > > > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + > > '/lib64 -lblas -llapack' > > > > Is there like and os.exec that I could use like this for my FLAGS? > > > > > > > > On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley wrote: > > > > > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams wrote: > > > > > >> I get this error configuring zlib, osx, with OpenMP. > > >> Any ideas? > > >> > > > > > > This failed without output > > > > > > Executing: cd > > > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > > > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" CFLAGS="-fstack-protector > > > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" ./configure && > > > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > > > > > So execute each step in turn and see what fails. 
> > > > > > Thanks, > > > > > > Matt > > > > > > > > >> Thanks, > > >> Mark > > >> > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > > > experiments is infinitely more interesting than any results to which their > > > experiments lead. > > > -- Norbert Wiener > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > From mfadams at lbl.gov Mon Apr 13 11:52:58 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 13 Apr 2020 12:52:58 -0400 Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: On Mon, Apr 13, 2020 at 12:48 PM Satish Balay wrote: > This is very funky > > >>> > Configure Options: --configModules=PETSc.Configure > --optionsModule=config.compilerOptions > --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-O2 -g > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > --prefix libomp)/lib -lomp"" CXXOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > FOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > --download-parmetis=1 --download-metis=1 --download-hypre=1 > --download-triangle=1 --download-p4est=1 --download-zlib --with-x=0 > --download-ctetgen --with-debugging=0 --download-hdf5=1 > PETSC_ARCH=arch-macosx-gnu-O-omp --with-openmp=1 --with-log=0 > --with-threadsafety --download-chaco > <<< > > -I"$(brew --prefix libomp)/include" type options to configure doesn't make > sense. You are using bash syntax here - and expecting configure to resolve > it. Its best for your bash shell to evaluate this before passing this info > to configure > > Also --download-zlib isn't needed on OSX > Hum, I get: 12:52 mark/feature-xgc-interface-rebase *= ~/Codes/petsc$ ../arch-macosx-gnu-O-omp.py =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: configureExternalPackagesDir from config.framework(config/BuildSystem/config/framework.py:911) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Package p4est requested but dependency zlib not requested. Perhaps you want --download-zlib ******************************************************************************* > > Satish > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > Now that I look at it, I see: > > > > CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 -g > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > --prefix libomp)/lib -lomp"" > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > Note the two ". That does not look right. I use > > > > 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', > > > > I know how to do stuff like: > > > > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + > > '/lib64 -lblas -llapack' > > > > Is there like and os.exec that I could use like this for my FLAGS? > > > > > > > > On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley > wrote: > > > > > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams wrote: > > > > > >> I get this error configuring zlib, osx, with OpenMP. > > >> Any ideas? 
> > >> > > > > > > This failed without output > > > > > > Executing: cd > > > > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > > > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" > CFLAGS="-fstack-protector > > > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib > -lomp"" > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > ./configure && > > > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > > > > > So execute each step in turn and see what fails. > > > > > > Thanks, > > > > > > Matt > > > > > > > > >> Thanks, > > >> Mark > > >> > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > > > experiments is infinitely more interesting than any results to which > their > > > experiments lead. > > > -- Norbert Wiener > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 13 12:08:05 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 13 Apr 2020 12:08:05 -0500 (CDT) Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: you haven't sent any logs for this issue.. [../arch-macosx-gnu-O-omp.py script or configure.log with the failure] Satish ------- ipro:petsc balay$ ./configure --with-fortran-bindings=0 --with-mpi=0 --with-zlib=1 =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== ***** WARNING: You have an older version of Gnu make, it will work, but may not support all the parallel testing options. 
You can install the latest Gnu make with your package manager, such as brew or macports, or use the --download-make option to get the latest Gnu make ***** =============================================================================== Compil ers: C Compiler: gcc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 Version: Apple clang version 11.0.3 (clang-1103.0.32.29) C++ Compiler: g++ -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -fvisibility=hidden -g -std=c++14 Version: Apple clang version 11.0.3 (clang-1103.0.32.29) Fortran Compiler: gfortran -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g Version: GNU Fortran (Homebrew GCC 9.3.0) 9.3.0 Linkers: Shared linker: gcc -dynamiclib -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 Dynamic linker: gcc -dynamiclib -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 Libraries linked against: -lc++ -ldl make: Version: 3.81 /usr/bin/make BlasLapack: Library: -llapack -lblas Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) uses 4 byte integers pthread: zlib: Library: -lz cmake: Version: 3.16.5 /usr/local/bin/cmake X: Includes: -I/opt/X11/include Library: -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 regex: Language used to compile PETSc: C PETSc: PETSC_ARCH: arch-darwin-c-debug PETSC_DIR: /Users/balay/petsc Scalar type: real Precision: double Integer size: 4 bytes shared libraries: enabled Memory alignment from malloc(): 16 bytes xxx=========================================================================xxx Configure stage complete. Now build PETSc libraries with: make PETSC_DIR=/Users/balay/petsc PETSC_ARCH=arch-darwin-c-debug all xxx=========================================================================xxx ipro:petsc balay$ On Mon, 13 Apr 2020, Mark Adams wrote: > On Mon, Apr 13, 2020 at 12:48 PM Satish Balay wrote: > > > This is very funky > > > > >>> > > Configure Options: --configModules=PETSc.Configure > > --optionsModule=config.compilerOptions > > --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-O2 -g > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > --prefix libomp)/lib -lomp"" CXXOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > FOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > --download-parmetis=1 --download-metis=1 --download-hypre=1 > > --download-triangle=1 --download-p4est=1 --download-zlib --with-x=0 > > --download-ctetgen --with-debugging=0 --download-hdf5=1 > > PETSC_ARCH=arch-macosx-gnu-O-omp --with-openmp=1 --with-log=0 > > --with-threadsafety --download-chaco > > <<< > > > > -I"$(brew --prefix libomp)/include" type options to configure doesn't make > > sense. You are using bash syntax here - and expecting configure to resolve > > it. 
Its best for your bash shell to evaluate this before passing this info > > to configure > > > > Also --download-zlib isn't needed on OSX > > > > Hum, I get: > > 12:52 mark/feature-xgc-interface-rebase *= ~/Codes/petsc$ > ../arch-macosx-gnu-O-omp.py > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > TESTING: configureExternalPackagesDir from > config.framework(config/BuildSystem/config/framework.py:911) > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > Package p4est requested but dependency zlib not requested. Perhaps you want > --download-zlib > ******************************************************************************* > > > > > > > Satish > > > > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > > > Now that I look at it, I see: > > > > > > CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 -g > > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > > --prefix libomp)/lib -lomp"" > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > > > Note the two ". That does not look right. I use > > > > > > 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', > > > > > > I know how to do stuff like: > > > > > > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + > > > '/lib64 -lblas -llapack' > > > > > > Is there like and os.exec that I could use like this for my FLAGS? > > > > > > > > > > > > On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley > > wrote: > > > > > > > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams wrote: > > > > > > > >> I get this error configuring zlib, osx, with OpenMP. > > > >> Any ideas? > > > >> > > > > > > > > This failed without output > > > > > > > > Executing: cd > > > > > > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > > > > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" > > CFLAGS="-fstack-protector > > > > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp > > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib > > -lomp"" > > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > ./configure && > > > > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > > > > > > > So execute each step in turn and see what fails. > > > > > > > > Thanks, > > > > > > > > Matt > > > > > > > > > > > >> Thanks, > > > >> Mark > > > >> > > > > > > > > > > > > -- > > > > What most experimenters take for granted before they begin their > > > > experiments is infinitely more interesting than any results to which > > their > > > > experiments lead. > > > > -- Norbert Wiener > > > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Mon Apr 13 12:19:47 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 13 Apr 2020 12:19:47 -0500 (CDT) Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: And here is a p4est build. 
Satish -------- balay at kpro petsc % ./configure --with-mpi-dir=$HOME/soft/mpich-3.3.2 --with-zlib=1 --download-p4est =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== ***** WARNING: You have an older version of Gnu make, it will work, but may not support all the parallel testing options. You can install the latest Gnu make with your package manager, such as brew or macports, or use the --download-make option to get the latest Gnu make ***** =============================================================================== ====== ========================================================================= Trying to download git://https://bitbucket.org/petsc/pkg-sowing.git for SOWING =============================================================================== =============================================================================== Running configure on SOWING; this may take several minutes =============================================================================== =========== ==================================================================== Running make on SOWING; this may take several minutes =============================================================================== =============================================================================== Running make install on SOWING; this may take several minutes =============================================================================== ================ =============================================================== Trying to download git://https://github.com/tisaac/p4est for P4EST =============================================================================== =============================================================================== Trying to bootstrap p4est using autotools; this may take several minutes =============================================================================== ===================== ========================================================== Running configure on P4EST; this may take several minutes =============================================================================== =============================================================================== Running make on P4EST; this may take several minutes =============================================================================== ========================== ===================================================== Running make install on P4EST; this may take several minutes =============================================================================== Compilers: C Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 Version: Apple clang version 11.0.0 (clang-1100.0.33.8) C++ Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -fvisibility=hidden -g Version: Apple clang version 11.0.0 (clang-1100.0.33.8) Fortran Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpif90 -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g Version: GNU Fortran (Homebrew GCC 9.3.0) 9.3.0 Linkers: Shared linker: /Users/balay/soft/mpich-3.3.2/bin/mpicc -dynamiclib -single_module -undefined dynamic_lookup 
-multiply_defined suppress -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 Dynamic linker: /Users/balay/soft/mpich-3.3.2/bin/mpicc -dynamiclib -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 Libraries linked against: -lc++ -ldl make: Version: 3.81 /usr/bin/make BlasLapack: Library: -llapack -lblas Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) uses 4 byte integers MPI: Version: 3 Includes: -I/Users/balay/soft/mpich-3.3.2/include Mpiexec: /Users/balay/soft/mpich-3.3.2/bin/mpiexec MPICH_NUMVERSION: 30302300 pthread: X: Includes: -I/opt/X11/include Library: -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 zlib: Library: -lz cmake: Version: 3.16.5 /usr/local/bin/cmake regex: p4est: Includes: -I/Users/balay/petsc/arch-darwin-c-debug/include Library: -Wl,-rpath,/Users/balay/petsc/arch-darwin-c-debug/lib -L/Users/balay/petsc/arch-darwin-c-debug/lib -lp4est -lsc sowing: Version: 1.1.25 /Users/balay/petsc/arch-darwin-c-debug/bin/bfort Language used to compile PETSc: C PETSc: PETSC_ARCH: arch-darwin-c-debug PETSC_DIR: /Users/balay/petsc Scalar type: real Precision: double Integer size: 4 bytes shared libraries: enabled Memory alignment from malloc(): 16 bytes xxx=========================================================================xxx Configure stage complete. Now build PETSc libraries with: make PETSC_DIR=/Users/balay/petsc PETSC_ARCH=arch-darwin-c-debug all xxx=========================================================================xxx balay at kpro petsc % On Mon, 13 Apr 2020, Satish Balay via petsc-users wrote: > you haven't sent any logs for this issue.. > [../arch-macosx-gnu-O-omp.py script or configure.log with the failure] > > Satish > > ------- > ipro:petsc balay$ ./configure --with-fortran-bindings=0 --with-mpi=0 --with-zlib=1 > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > =============================================================================== ***** WARNING: You have an older version of Gnu make, it will work, but may not support all the parallel testing options. 
You can install the latest Gnu make with your package manager, such as brew or macports, or use the --download-make option to get the latest Gnu make ***** =============================================================================== Comp il > ers: > C Compiler: gcc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > Version: Apple clang version 11.0.3 (clang-1103.0.32.29) > C++ Compiler: g++ -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -fvisibility=hidden -g -std=c++14 > Version: Apple clang version 11.0.3 (clang-1103.0.32.29) > Fortran Compiler: gfortran -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g > Version: GNU Fortran (Homebrew GCC 9.3.0) 9.3.0 > Linkers: > Shared linker: gcc -dynamiclib -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > Dynamic linker: gcc -dynamiclib -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > Libraries linked against: -lc++ -ldl > make: > Version: 3.81 > /usr/bin/make > BlasLapack: > Library: -llapack -lblas > Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) > uses 4 byte integers > pthread: > zlib: > Library: -lz > cmake: > Version: 3.16.5 > /usr/local/bin/cmake > X: > Includes: -I/opt/X11/include > Library: -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 > regex: > Language used to compile PETSc: C > PETSc: > PETSC_ARCH: arch-darwin-c-debug > PETSC_DIR: /Users/balay/petsc > Scalar type: real > Precision: double > Integer size: 4 bytes > shared libraries: enabled > Memory alignment from malloc(): 16 bytes > xxx=========================================================================xxx > Configure stage complete. Now build PETSc libraries with: > make PETSC_DIR=/Users/balay/petsc PETSC_ARCH=arch-darwin-c-debug all > xxx=========================================================================xxx > ipro:petsc balay$ > > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > On Mon, Apr 13, 2020 at 12:48 PM Satish Balay wrote: > > > > > This is very funky > > > > > > >>> > > > Configure Options: --configModules=PETSc.Configure > > > --optionsModule=config.compilerOptions > > > --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-O2 -g > > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > > --prefix libomp)/lib -lomp"" CXXOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > > FOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > > --download-parmetis=1 --download-metis=1 --download-hypre=1 > > > --download-triangle=1 --download-p4est=1 --download-zlib --with-x=0 > > > --download-ctetgen --with-debugging=0 --download-hdf5=1 > > > PETSC_ARCH=arch-macosx-gnu-O-omp --with-openmp=1 --with-log=0 > > > --with-threadsafety --download-chaco > > > <<< > > > > > > -I"$(brew --prefix libomp)/include" type options to configure doesn't make > > > sense. You are using bash syntax here - and expecting configure to resolve > > > it. 
Its best for your bash shell to evaluate this before passing this info > > > to configure > > > > > > Also --download-zlib isn't needed on OSX > > > > > > > Hum, I get: > > > > 12:52 mark/feature-xgc-interface-rebase *= ~/Codes/petsc$ > > ../arch-macosx-gnu-O-omp.py > > =============================================================================== > > Configuring PETSc to compile on your system > > > > =============================================================================== > > TESTING: configureExternalPackagesDir from > > config.framework(config/BuildSystem/config/framework.py:911) > > > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > ------------------------------------------------------------------------------- > > Package p4est requested but dependency zlib not requested. Perhaps you want > > --download-zlib > > ******************************************************************************* > > > > > > > > > > > > Satish > > > > > > > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > > > > > Now that I look at it, I see: > > > > > > > > CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 -g > > > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > > > --prefix libomp)/lib -lomp"" > > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > > > > > Note the two ". That does not look right. I use > > > > > > > > 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', > > > > > > > > I know how to do stuff like: > > > > > > > > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + > > > > '/lib64 -lblas -llapack' > > > > > > > > Is there like and os.exec that I could use like this for my FLAGS? > > > > > > > > > > > > > > > > On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley > > > wrote: > > > > > > > > > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams wrote: > > > > > > > > > >> I get this error configuring zlib, osx, with OpenMP. > > > > >> Any ideas? > > > > >> > > > > > > > > > > This failed without output > > > > > > > > > > Executing: cd > > > > > > > > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > > > > > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" > > > CFLAGS="-fstack-protector > > > > > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor -fopenmp > > > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib > > > -lomp"" > > > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > ./configure && > > > > > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > > > > > > > > > So execute each step in turn and see what fails. > > > > > > > > > > Thanks, > > > > > > > > > > Matt > > > > > > > > > > > > > > >> Thanks, > > > > >> Mark > > > > >> > > > > > > > > > > > > > > > -- > > > > > What most experimenters take for granted before they begin their > > > > > experiments is infinitely more interesting than any results to which > > > their > > > > > experiments lead. > > > > > -- Norbert Wiener > > > > > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > > > > > > > > > > > > > > > From mfadams at lbl.gov Mon Apr 13 14:02:47 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 13 Apr 2020 15:02:47 -0400 Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: Ah, you have zlib=1. Now hdf5 fails. 
On Mon, Apr 13, 2020 at 1:19 PM Satish Balay wrote: > And here is a p4est build. > > Satish > -------- > > balay at kpro petsc % ./configure --with-mpi-dir=$HOME/soft/mpich-3.3.2 > --with-zlib=1 --download-p4est > > =============================================================================== > Configuring PETSc to compile on your system > > > =============================================================================== > =============================================================================== > > ***** WARNING: You have an older version of Gnu make, it will > work, > but may not support all the parallel testing > options. You can install the > latest Gnu make with your > package manager, such as brew or macports, or use > the > --download-make option to get the latest Gnu make ***** > > > =============================================================================== > > ====== > ========================================================================= > > Trying to download git:// > https://bitbucket.org/petsc/pkg-sowing.git for SOWING > > =============================================================================== > > > =============================================================================== > > Running configure on SOWING; this may take several minutes > > > =============================================================================== > > =========== > ==================================================================== > > Running make on SOWING; this may take several minutes > > > =============================================================================== > > > =============================================================================== > > Running make install on SOWING; this may take several minutes > > > =============================================================================== > > ================ > =============================================================== > > Trying to download git://https://github.com/tisaac/p4est for P4EST > > > =============================================================================== > > > =============================================================================== > > Trying to bootstrap p4est using autotools; this may take > several minutes > > =============================================================================== > > ===================== > ========================================================== > > Running configure on P4EST; this may take several minutes > > > =============================================================================== > > > =============================================================================== > > Running make on P4EST; this may take several minutes > > > =============================================================================== > > ========================== > ===================================================== > > Running make install on P4EST; this may take several minutes > > > =============================================================================== > > Compilers: > > > C Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpicc -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > Version: Apple clang version 11.0.0 (clang-1100.0.33.8) > C++ Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpicxx -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fno-stack-check -fvisibility=hidden -g > Version: Apple clang version 11.0.0 
(clang-1100.0.33.8) > Fortran Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpif90 -Wall > -ffree-line-length-0 -Wno-unused-dummy-argument -g > Version: GNU Fortran (Homebrew GCC 9.3.0) 9.3.0 > Linkers: > Shared linker: /Users/balay/soft/mpich-3.3.2/bin/mpicc -dynamiclib > -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > Dynamic linker: /Users/balay/soft/mpich-3.3.2/bin/mpicc -dynamiclib > -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > Libraries linked against: -lc++ -ldl > make: > Version: 3.81 > /usr/bin/make > BlasLapack: > Library: -llapack -lblas > Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> > yourprogram -log_view) > uses 4 byte integers > MPI: > Version: 3 > Includes: -I/Users/balay/soft/mpich-3.3.2/include > Mpiexec: /Users/balay/soft/mpich-3.3.2/bin/mpiexec > MPICH_NUMVERSION: 30302300 > pthread: > X: > Includes: -I/opt/X11/include > Library: -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 > zlib: > Library: -lz > cmake: > Version: 3.16.5 > /usr/local/bin/cmake > regex: > p4est: > Includes: -I/Users/balay/petsc/arch-darwin-c-debug/include > Library: -Wl,-rpath,/Users/balay/petsc/arch-darwin-c-debug/lib > -L/Users/balay/petsc/arch-darwin-c-debug/lib -lp4est -lsc > sowing: > Version: 1.1.25 > /Users/balay/petsc/arch-darwin-c-debug/bin/bfort > Language used to compile PETSc: C > PETSc: > PETSC_ARCH: arch-darwin-c-debug > PETSC_DIR: /Users/balay/petsc > Scalar type: real > Precision: double > Integer size: 4 bytes > shared libraries: enabled > Memory alignment from malloc(): 16 bytes > > xxx=========================================================================xxx > Configure stage complete. Now build PETSc libraries with: > make PETSC_DIR=/Users/balay/petsc PETSC_ARCH=arch-darwin-c-debug all > > xxx=========================================================================xxx > balay at kpro petsc % > > > On Mon, 13 Apr 2020, Satish Balay via petsc-users wrote: > > > you haven't sent any logs for this issue.. > > [../arch-macosx-gnu-O-omp.py script or configure.log with the failure] > > > > Satish > > > > ------- > > ipro:petsc balay$ ./configure --with-fortran-bindings=0 --with-mpi=0 > --with-zlib=1 > > > =============================================================================== > > Configuring PETSc to compile on your system > > > > =============================================================================== > > > =============================================================================== > > ***** WARNING: You have an older version of Gnu make, it will > work, > but may not support all the parallel testing > options. 
You can install the > latest Gnu make with your > package manager, such as brew or macports, or use > the > --download-make option to get the latest Gnu make ***** > > > =============================================================================== > > Comp > il > > ers: > > > > C Compiler: gcc -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments > -fvisibility=hidden -g3 > > Version: Apple clang version 11.0.3 (clang-1103.0.32.29) > > C++ Compiler: g++ -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -fstack-protector -fno-stack-check -fvisibility=hidden > -g -std=c++14 > > Version: Apple clang version 11.0.3 (clang-1103.0.32.29) > > Fortran Compiler: gfortran -Wall -ffree-line-length-0 > -Wno-unused-dummy-argument -g > > Version: GNU Fortran (Homebrew GCC 9.3.0) 9.3.0 > > Linkers: > > Shared linker: gcc -dynamiclib -single_module -undefined > dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > > Dynamic linker: gcc -dynamiclib -single_module -undefined > dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > > Libraries linked against: -lc++ -ldl > > make: > > Version: 3.81 > > /usr/bin/make > > BlasLapack: > > Library: -llapack -lblas > > Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> > yourprogram -log_view) > > uses 4 byte integers > > pthread: > > zlib: > > Library: -lz > > cmake: > > Version: 3.16.5 > > /usr/local/bin/cmake > > X: > > Includes: -I/opt/X11/include > > Library: -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 > > regex: > > Language used to compile PETSc: C > > PETSc: > > PETSC_ARCH: arch-darwin-c-debug > > PETSC_DIR: /Users/balay/petsc > > Scalar type: real > > Precision: double > > Integer size: 4 bytes > > shared libraries: enabled > > Memory alignment from malloc(): 16 bytes > > > xxx=========================================================================xxx > > Configure stage complete. 
Now build PETSc libraries with: > > make PETSC_DIR=/Users/balay/petsc PETSC_ARCH=arch-darwin-c-debug all > > > xxx=========================================================================xxx > > ipro:petsc balay$ > > > > > > > > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > > > On Mon, Apr 13, 2020 at 12:48 PM Satish Balay > wrote: > > > > > > > This is very funky > > > > > > > > >>> > > > > Configure Options: --configModules=PETSc.Configure > > > > --optionsModule=config.compilerOptions > > > > --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-O2 -g > > > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > > > --prefix libomp)/lib -lomp"" CXXOPTFLAGS="-O2 -g -Xpreprocessor > -fopenmp > > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib > -lomp"" > > > > FOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > > > --download-parmetis=1 --download-metis=1 --download-hypre=1 > > > > --download-triangle=1 --download-p4est=1 --download-zlib --with-x=0 > > > > --download-ctetgen --with-debugging=0 --download-hdf5=1 > > > > PETSC_ARCH=arch-macosx-gnu-O-omp --with-openmp=1 --with-log=0 > > > > --with-threadsafety --download-chaco > > > > <<< > > > > > > > > -I"$(brew --prefix libomp)/include" type options to configure > doesn't make > > > > sense. You are using bash syntax here - and expecting configure to > resolve > > > > it. Its best for your bash shell to evaluate this before passing > this info > > > > to configure > > > > > > > > Also --download-zlib isn't needed on OSX > > > > > > > > > > Hum, I get: > > > > > > 12:52 mark/feature-xgc-interface-rebase *= ~/Codes/petsc$ > > > ../arch-macosx-gnu-O-omp.py > > > > =============================================================================== > > > Configuring PETSc to compile on your system > > > > > > > =============================================================================== > > > TESTING: configureExternalPackagesDir from > > > config.framework(config/BuildSystem/config/framework.py:911) > > > > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for > > > details): > > > > ------------------------------------------------------------------------------- > > > Package p4est requested but dependency zlib not requested. Perhaps you > want > > > --download-zlib > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > > > > > > > Now that I look at it, I see: > > > > > > > > > > CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 > -g > > > > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" > -L"$(brew > > > > > --prefix libomp)/lib -lomp"" > > > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > > > > > > > Note the two ". That does not look right. I use > > > > > > > > > > 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > > > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', > > > > > > > > > > I know how to do stuff like: > > > > > > > > > > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] > + > > > > > '/lib64 -lblas -llapack' > > > > > > > > > > Is there like and os.exec that I could use like this for my FLAGS? 
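A note on the question just above: one way to avoid handing unexpanded $(brew --prefix libomp) text to configure is to evaluate the brew prefix in Python, inside the configure script itself, so the flags are fully expanded strings by the time petsc_configure sees them (the point Satish makes earlier in this thread). The sketch below is illustrative only: the trimmed option list is a placeholder and not the actual arch-macosx-gnu-O-omp.py, and it assumes brew is on the PATH.

#!/usr/bin/env python
# hypothetical reconfigure-style script: evaluate `brew --prefix libomp` here in
# Python instead of expecting configure to expand shell $(...) syntax itself
import os
import subprocess

omp_prefix = subprocess.check_output(['brew', '--prefix', 'libomp']).decode().strip()
omp_flags  = '-O2 -g -Xpreprocessor -fopenmp -I' + omp_prefix + '/include'
omp_link   = '-L' + omp_prefix + '/lib -lomp'

if __name__ == '__main__':
    import sys
    sys.path.insert(0, os.path.abspath('config'))
    import configure
    configure_options = [
        'COPTFLAGS='   + omp_flags,
        'CXXOPTFLAGS=' + omp_flags,
        'LDFLAGS='     + omp_link,
        '--with-openmp=1',
        '--with-debugging=0',
        # ... remaining --download-* options and PETSC_ARCH would go here ...
    ]
    configure.petsc_configure(configure_options)

The equivalent from an interactive bash shell is simply to let the shell do the expansion (for example OMP_PREFIX=$(brew --prefix libomp)) before ./configure is invoked.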
> > > > > > > > > > > > > > > > > > > > On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley < > knepley at gmail.com> > > > > wrote: > > > > > > > > > > > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams > wrote: > > > > > > > > > > > >> I get this error configuring zlib, osx, with OpenMP. > > > > > >> Any ideas? > > > > > >> > > > > > > > > > > > > This failed without output > > > > > > > > > > > > Executing: cd > > > > > > > > > > > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > > > > > > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" > > > > CFLAGS="-fstack-protector > > > > > > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor > -fopenmp > > > > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix > libomp)/lib > > > > -lomp"" > > > > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > ./configure && > > > > > > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > > > > > > > > > > > So execute each step in turn and see what fails. > > > > > > > > > > > > Thanks, > > > > > > > > > > > > Matt > > > > > > > > > > > > > > > > > >> Thanks, > > > > > >> Mark > > > > > >> > > > > > > > > > > > > > > > > > > -- > > > > > > What most experimenters take for granted before they begin their > > > > > > experiments is infinitely more interesting than any results to > which > > > > their > > > > > > experiments lead. > > > > > > -- Norbert Wiener > > > > > > > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1416008 bytes Desc: not available URL: From hu.ds.abel at icloud.com Mon Apr 13 22:50:29 2020 From: hu.ds.abel at icloud.com (huabel) Date: Tue, 14 Apr 2020 11:50:29 +0800 Subject: [petsc-users] AMD GPU card on OS X Message-ID: <424D014D-49F7-4B1F-B4D5-E74A69F2EC34@icloud.com> Dear PETSc users, Does PETSc suppurt AMD GPU card (such as RX 5700) on OS X? Thanks Abel Hu From rupp at iue.tuwien.ac.at Mon Apr 13 22:54:17 2020 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Tue, 14 Apr 2020 05:54:17 +0200 Subject: [petsc-users] AMD GPU card on OS X In-Reply-To: <424D014D-49F7-4B1F-B4D5-E74A69F2EC34@icloud.com> References: <424D014D-49F7-4B1F-B4D5-E74A69F2EC34@icloud.com> Message-ID: Hi Abel, try the ViennaCL backend. It used to work in the past, but I don't know the current state of AMD drivers and OpenCL on OS X. Best regards, Karli On 4/14/20 5:50 AM, huabel via petsc-users wrote: > Dear PETSc users, > > Does PETSc suppurt AMD GPU card (such as RX 5700) on OS X? > > Thanks > Abel Hu > From paeanball at gmail.com Tue Apr 14 04:18:50 2020 From: paeanball at gmail.com (Bao Kai) Date: Tue, 14 Apr 2020 11:18:50 +0200 Subject: [petsc-users] Status about using PETSC with MATLAB Message-ID: Hi, I saw some discussion in the mailing list, while not a lot. I am wondering the current status about using PETSc with MATLAB before I dig in. To be short, Would I be able to use the non-linear solver or linear solver with a MATLAB code? For example, I have a simulation code with MATLAB, I want to use/test the non-linear solver or linear solver from PETSc. Is it something doable or supported here? It is mostly for testing and study purposes. The performance is not the main concern here. Thanks. 
Best Regards, Kai Bao From reuben.hill10 at imperial.ac.uk Tue Apr 14 04:47:08 2020 From: reuben.hill10 at imperial.ac.uk (Hill, Reuben) Date: Tue, 14 Apr 2020 09:47:08 +0000 Subject: [petsc-users] 1D DMSWARM in 1D DMPlex mesh Message-ID: Hi all, I might be missing something obvious, but I can't tell from the documentation if 1D coordinate DMSWARMs (immersed in interval DMPlex meshes) are supported. Does anyone know? I've successfully implemented 2D and 3D coordinate DMSwarms in 2D and 3D DMPlexes using DMSwarmSetPointCoordinates in Firedrake using petsc4py (via swarm.setPointCoordinates. The petsc4py function forces the input numpy array to have 2 dimensions with 1 column per dimension. In the 1D case, where each row of the coordinates array therefore has one column, I get the following error: E petsc4py.PETSc.Error: error code 63 E [0] DMSwarmSetPointCoordinates() line 305 in /Users/rwh10/firedrake/src/petsc/src/dm/impls/swarm/swarmpic.c E [0] DMLocatePoints() line 6499 in /Users/rwh10/firedrake/src/petsc/src/dm/interface/dm.c E [0] DMLocatePoints_Plex() line 744 in /Users/rwh10/firedrake/src/petsc/src/dm/impls/plex/plexgeometry.c E [0] DMPlexLocatePoint_Internal() line 462 in /Users/rwh10/firedrake/src/petsc/src/dm/impls/plex/plexgeometry.c E [0] Argument out of range E [0] No point location for cell 0 with type segment Thanks Reuben Hill -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 14 07:49:33 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Apr 2020 08:49:33 -0400 Subject: [petsc-users] 1D DMSWARM in 1D DMPlex mesh In-Reply-To: References: Message-ID: On Tue, Apr 14, 2020 at 5:49 AM Hill, Reuben wrote: > Hi all, > > I might be missing something obvious, but I can't tell from the > documentation if 1D coordinate DMSWARMs (immersed in interval DMPlex > meshes) are supported. Does anyone know? > > I've successfully implemented 2D and 3D coordinate DMSwarms in 2D and 3D > DMPlexes using DMSwarmSetPointCoordinates in Firedrake using petsc4py (via > swarm.setPointCoordinates. The petsc4py function forces the input numpy > array to have 2 dimensions with 1 column per dimension. In the 1D case, > where each row of the coordinates array therefore has one column, I get the > following error: > > E petsc4py.PETSc.Error: error code 63 > E [0] DMSwarmSetPointCoordinates() line 305 in > /Users/rwh10/firedrake/src/petsc/src/dm/impls/swarm/swarmpic.c > E [0] DMLocatePoints() line 6499 in > /Users/rwh10/firedrake/src/petsc/src/dm/interface/dm.c > E [0] DMLocatePoints_Plex() line 744 in > /Users/rwh10/firedrake/src/petsc/src/dm/impls/plex/plexgeometry.c > E [0] DMPlexLocatePoint_Internal() line 462 in > /Users/rwh10/firedrake/src/petsc/src/dm/impls/plex/plexgeometry.c > E [0] Argument out of range > E [0] No point location for cell 0 with type segment > Point location in 1D was not implemented because no one ever asked for it. Do you need this? or is this just for completeness? Thanks, Matt > Thanks > > Reuben Hill > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sajidsyed2021 at u.northwestern.edu Tue Apr 14 11:07:11 2020 From: sajidsyed2021 at u.northwestern.edu (Sajid Ali) Date: Tue, 14 Apr 2020 11:07:11 -0500 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: <87tvfnvi65.fsf@jedbrown.org> References: <87tvfnvi65.fsf@jedbrown.org> Message-ID: Hi Jed/PETSc-developers, My goal is to invert a set of these PDE's to obtain a series of parameters F_t (with TSSolve and TSAdjoint for function/gradient computation). I was planning to use TAO for setting up the inverse problem but given that TAO doesn't support complex scalars, I'm re-thinking about converting this to a real formulation. In https://doi.org/10.1007/s00466-006-0047-8, Mark Adams explains that the K1 formulation of Day/Heroux is well suited to block matrices in PETSc but the PDE described there has an large SPD operator arising out of the underlying elliptic PDE. ( Day/Heroux in their paper claim that K2/K3 formulations are problematic for Krylov solvers due to non ideal eigenvalue spectra created by conversion to real formulation). Now, the system I have is : u_t = A*(u_xx + u_yy) + F_t*u; (A is purely imaginary and F_t is complex, with abs(A/F) ~ 1e-16. The parabolic PDE is converted to a series of TS solves each being elliptic). I implemented the K1/K4 approaches by using DMDA to manage the 2-dof grid instead of setting it up as one large vector of [real,imag] and those didn't converge well either (at least with simple preconditioners). Any pointers as to what I could do to make the real formulation well conditioned ? Or should I not bother with this for now and implement a first order gradient descent method in PETSc (while approximating the regularizer as a cost integrand) ? Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io On Wed, Mar 27, 2019 at 9:36 PM Jed Brown wrote: > When you roll your own equivalent real formulation, PETSc has no way of > knowing what conjugate transpose might mean, thus symmetry is lost. I > would suggest just using the AVX2 implementation for now and putting in > a request (or contributing a patch) for AVX-512 complex optimizations. > > Sajid Ali via petsc-users writes: > > > Hi, > > > > I'm able to solve the following equation using complex numbers (with > > ts_type cn and pc_type gamg) : > > u_t = A*u'' + F_t*u; > > (where A = -1j/(2k) amd u'' refers to u_xx+u_yy implemented with the > > familiar 5-point stencil) > > > > Now, I want to solve the same problem using real numbers. The equivalent > > equations are: > > u_t_real = 1/(2k) * u''_imag + F_real*u_real - F_imag*u_imag > > u_t_imag = -1/(2k) * u''_real + F_imag*u_real - F_real*u_imag > > > > Thus, if we now take our new u vector to have twice the length of the > > problem we're solving, keeping the first half as real and the second half > > as imaginary, we'd get a matrix that had matrices computing the laplacian > > via the 5-point stencil in the top-right and bottom-left corners and a > > diagonal [F_real+F_imag, F_real-F_imag] term. > > > > I tried doing this and the gamg preconditioner complains about an > > unsymmetric matrix. If i use the default preconditioner, I get > > DIVERGED_NONLINEAR_SOLVE. > > > > Is there a way to better organize the matrix ? > > > > PS: I'm trying to do this using only real numbers because I realized that > > the optimized avx-512 kernels for KNL are not implemented for complex > > numbers. Would that be implemented soon ? 
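As a reference for the block structure described in the quoted message above (the 5-point Laplacian appearing in the top-right and bottom-left corners), the algebra behind any such equivalent real formulation is the identity (C + iD)(x + iy) = (Cx - Dy) + i(Dx + Cy), where the symbols C, D, x, y, f, g are introduced here only for illustration: A = C + iD is the complex operator, z = x + iy the unknown, and b = f + ig the right-hand side. In LaTeX form the real system reads

  \begin{pmatrix} C & -D \\ D & C \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}
  =
  \begin{pmatrix} f \\ g \end{pmatrix}.

With A purely imaginary, as in the -1j/(2k) Laplacian term above, C roughly collects the contributions of F_real and the time discretization, while the 5-point Laplacian (together with F_imag) lands in the D blocks, which is exactly the off-diagonal placement described. This is only a restatement of the equations quoted above (a K1-style form in the Day/Heroux naming used earlier in the thread), not a new discretization.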
> > > > Thank You, > > Sajid Ali > > Applied Physics > > Northwestern University > -- Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Tue Apr 14 12:13:45 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Tue, 14 Apr 2020 10:13:45 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: Message-ID: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> Hi Junchao, We have tried your two suggestions but the problem remains. And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the problem must be elsewhere and not MPI. Give that this is a 64 bit indices build of PETSc, is there some possible incompatibility between PETSc and MPI calls? We are open to any other possible suggestions to try as other than valgrind on thousands of processes we seem to have run out of ideas. Thanks, Randy M. > On Apr 13, 2020, at 8:54 AM, Junchao Zhang wrote: > > > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang > wrote: > Randy, > Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr > Correct: I_MPI_ADJUST_ALLREDUCE=1 > But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. > Thanks. > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: > Dear PETSc users, > > We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. > This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. > > The run generates the following messages, but doesn?t crash, just seems to hang with all processes continuing to show activity: > > [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c > [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c > [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c > [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c > > > Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend: > > > /* Post the Isends with the message length-info */ > for (i=0,j=0; i if (ilengths[i]) { > ierr = MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); > j++; > } > } > > We have tried this with Intel MPI 2018, 2019, and mpich, all giving the same problem. > > We suspect there is some limit being set on this cloud cluster on the number of file connections or something, but we don?t know. > > Anyone have any ideas? We are sort of grasping for straws at this point. > > Thanks, Randy M. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hongzhang at anl.gov Tue Apr 14 13:28:10 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 14 Apr 2020 18:28:10 +0000 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: Message-ID: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> On Mar 27, 2019, at 8:07 PM, Sajid Ali via petsc-users > wrote: Hi, I'm able to solve the following equation using complex numbers (with ts_type cn and pc_type gamg) : u_t = A*u'' + F_t*u; (where A = -1j/(2k) amd u'' refers to u_xx+u_yy implemented with the familiar 5-point stencil) Now, I want to solve the same problem using real numbers. The equivalent equations are: u_t_real = 1/(2k) * u''_imag + F_real*u_real - F_imag*u_imag u_t_imag = -1/(2k) * u''_real + F_imag*u_real - F_real*u_imag Thus, if we now take our new u vector to have twice the length of the problem we're solving, keeping the first half as real and the second half as imaginary, we'd get a matrix that had matrices computing the laplacian via the 5-point stencil in the top-right and bottom-left corners and a diagonal [F_real+F_imag, F_real-F_imag] term. I tried doing this and the gamg preconditioner complains about an unsymmetric matrix. If i use the default preconditioner, I get DIVERGED_NONLINEAR_SOLVE. Is there a way to better organize the matrix ? PS: I'm trying to do this using only real numbers because I realized that the optimized avx-512 kernels for KNL are not implemented for complex numbers. Would that be implemented soon ? Can you provide a PETSc log for your code using complex numbers with -ts_type cn and -pc_type gamg? I am doubtful that it could benefit much from AVX optimizations on KNL. Thanks, Hong (Mr.) Thank You, Sajid Ali Applied Physics Northwestern University -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajidsyed2021 at u.northwestern.edu Tue Apr 14 13:42:38 2020 From: sajidsyed2021 at u.northwestern.edu (Sajid Ali) Date: Tue, 14 Apr 2020 13:42:38 -0500 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> Message-ID: Hi Hong, Apologies for creating unnecessary confusion by continuing the old thread instead of creating a new one. While I looked into converting the complex PDE formulation to a real valued formulation in the past hoping for better performance, my concern now is with TAO being incompatible with complex scalars. I would've preferred to keep the complex PDE formulation as is (given that I spent some time tuning it and it works well now) for cost function and gradient evaluation while using TAO for the outer optimization loop. Using TAO has the obvious benefit of defining a multi objective cost function, parametrized as a fit to a series of measurements and a set of regularizers while not having to explicitly worry about differentiating the regularizer or have to think about implementing a good optimization scheme. But if it converting the complex formulation to a real formulation would mean a loss of well conditioned forward solve (and increase in solving time itself), I was wondering if it would be better to keep the complex PDE formulation and write an optimization loop in PETSc while defining the regularizer via a cost integrand. Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Apr 14 14:08:14 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Apr 2020 15:08:14 -0400 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> Message-ID: On Tue, Apr 14, 2020 at 2:44 PM Sajid Ali wrote: > Hi Hong, > > Apologies for creating unnecessary confusion by continuing the old thread > instead of creating a new one. > > While I looked into converting the complex PDE formulation to a real > valued formulation in the past hoping for better performance, my concern > now is with TAO being incompatible with complex scalars. I would've > preferred to keep the complex PDE formulation as is (given that I spent > some time tuning it and it works well now) for cost function and gradient > evaluation while using TAO for the outer optimization loop. > > Using TAO has the obvious benefit of defining a multi objective cost > function, parametrized as a fit to a series of measurements and a set of > regularizers while not having to explicitly worry about differentiating the > regularizer or have to think about implementing a good optimization scheme. > But if it converting the complex formulation to a real formulation would > mean a loss of well conditioned forward solve (and increase in solving time > itself), I was wondering if it would be better to keep the complex PDE > formulation and write an optimization loop in PETSc while defining the > regularizer via a cost integrand. > > What exactly is the problem with TAO and complex? Is it only for some methods? Thanks, Matt > Thank You, > Sajid Ali | PhD Candidate > Applied Physics > Northwestern University > s-sajid-ali.github.io > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Apr 14 14:23:27 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 14 Apr 2020 14:23:27 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> Message-ID: There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I doubted it was the problem. Even if users configure petsc with 64-bit indices, we use PetscMPIInt in MPI calls. So it is not a problem. Try -vecscatter_type mpi1 to restore to the original VecScatter implementation. If the problem still remains, could you provide a test example for me to debug? --Junchao Zhang On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie wrote: > Hi Junchao, > > We have tried your two suggestions but the problem remains. > And the problem seems to be on the MPI_Isend line 117 in > PetscGatherMessageLengths and not MPI_AllReduce. > > We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the > problem must be elsewhere and not MPI. > > Give that this is a 64 bit indices build of PETSc, is there some possible > incompatibility between PETSc and MPI calls? > > We are open to any other possible suggestions to try as other than > valgrind on thousands of processes we seem to have run out of ideas. > > Thanks, Randy M. 
> > On Apr 13, 2020, at 8:54 AM, Junchao Zhang > wrote: > > > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang > wrote: > >> Randy, >> Someone reported similar problem before. It turned out an Intel MPI >> MPI_Allreduce bug. A workaround is setting the environment variable >> I_MPI_ADJUST_ALLREDUCE=1.arr >> > Correct: I_MPI_ADJUST_ALLREDUCE=1 > >> But you mentioned mpich also had the error. So maybe the problem is >> not the same. So let's try the workaround first. If it doesn't work, add >> another petsc option -build_twosided allreduce, which is a workaround for >> Intel MPI_Ibarrier bugs we met. >> Thanks. >> --Junchao Zhang >> >> >> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie >> wrote: >> >>> Dear PETSc users, >>> >>> We are trying to understand an issue that has come up in running our >>> code on a large cloud cluster with a large number of processes and subcomms. >>> This is code that we use daily on multiple clusters without problems, >>> and that runs valgrind clean for small test problems. >>> >>> The run generates the following messages, but doesn?t crash, just seems >>> to hang with all processes continuing to show activity: >>> >>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in >>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in >>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in >>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in >>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>> >>> >>> Looking at line 117 in PetscGatherMessageLengths we find the offending >>> statement is the MPI_Isend: >>> >>> >>> /* Post the Isends with the message length-info */ >>> for (i=0,j=0; i>> if (ilengths[i]) { >>> ierr = >>> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>> j++; >>> } >>> } >>> >>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the >>> same problem. >>> >>> We suspect there is some limit being set on this cloud cluster on the >>> number of file connections or something, but we don?t know. >>> >>> Anyone have any ideas? We are sort of grasping for straws at this point. >>> >>> Thanks, Randy M. >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajidsyed2021 at u.northwestern.edu Tue Apr 14 14:30:35 2020 From: sajidsyed2021 at u.northwestern.edu (Sajid Ali) Date: Tue, 14 Apr 2020 14:30:35 -0500 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> Message-ID: Hi Matthew, The TAO manual states that (preface, page vi) "However, TAO is not compatible with PETSc installations using complex data types." (The tao examples all require !complex builds. When I tried to run them with a petsc build with +complex the compiler complains of incompatible pointer types and the example crashes at runtime) Is there any plan to support TAO with complex scalars ? I had planned to re-use the TS object in an optimization loop with the F vector defined both as a parameter in TS and as the independent variable in the outer TAO loop. Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefano.zampini at gmail.com Tue Apr 14 14:31:54 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Tue, 14 Apr 2020 22:31:54 +0300 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> Message-ID: Tao does not support --with-scalar-type=complex Il Mar 14 Apr 2020, 22:09 Matthew Knepley ha scritto: > On Tue, Apr 14, 2020 at 2:44 PM Sajid Ali < > sajidsyed2021 at u.northwestern.edu> wrote: > >> Hi Hong, >> >> Apologies for creating unnecessary confusion by continuing the old thread >> instead of creating a new one. >> >> While I looked into converting the complex PDE formulation to a real >> valued formulation in the past hoping for better performance, my concern >> now is with TAO being incompatible with complex scalars. I would've >> preferred to keep the complex PDE formulation as is (given that I spent >> some time tuning it and it works well now) for cost function and gradient >> evaluation while using TAO for the outer optimization loop. >> >> Using TAO has the obvious benefit of defining a multi objective cost >> function, parametrized as a fit to a series of measurements and a set of >> regularizers while not having to explicitly worry about differentiating the >> regularizer or have to think about implementing a good optimization scheme. >> But if it converting the complex formulation to a real formulation would >> mean a loss of well conditioned forward solve (and increase in solving time >> itself), I was wondering if it would be better to keep the complex PDE >> formulation and write an optimization loop in PETSc while defining the >> regularizer via a cost integrand. >> >> > What exactly is the problem with TAO and complex? Is it only for some > methods? > > Thanks, > > Matt > > >> Thank You, >> Sajid Ali | PhD Candidate >> Applied Physics >> Northwestern University >> s-sajid-ali.github.io >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Apr 14 14:50:44 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 14 Apr 2020 15:50:44 -0400 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> Message-ID: First, you need to order your equations (r_0, i_0, r_1, i_1, ...) and then set a block size of two (times the real block size of your equations) in the matrix, for GAMG to work. PETSc can do this for you with fieldsplit. The symmetric stuff that GAMG requaries is just for the (parallel) graph coarsening and you just need to add a parameter where GAMG will symmetrize the graph used for coarsening, not your real matrix. Or you can use a zero threshold. Chebyshev is the default smoother in GAMG. Chebyshev is not well suited to asymmetric matrices. You need to use the right form, which you seem to have a handle on, and if the asymmetry is not too bad cheby might work. Otherwise, I would use gmres or richardson/jacobi with a proper damping parameter. If you use gmres you want to use fgmres as the outer solver. 
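A hypothetical options sketch of the kind of setup described above: interleaved (r_0, i_0, r_1, i_1, ...) ordering with a block size of 2 set on the matrix (MatSetBlockSize, or a DMDA with dof = 2), GAMG with a symmetrized coarsening graph, and a non-Chebyshev smoother. The individual options below come from the standard GAMG/MG interface, but treat the combination, and in particular the damping value, as a starting point to tune rather than a recipe:

  -ksp_type fgmres                      # flexible outer Krylov; needed if gmres smoothers are used
  -pc_type gamg
  -pc_gamg_sym_graph true               # symmetrize only the graph used for coarsening
  -mg_levels_ksp_type richardson        # avoid the default Chebyshev on an asymmetric operator
  -mg_levels_ksp_richardson_scale 0.6   # illustrative damping parameter, needs tuning
  -mg_levels_pc_type jacobi

The fieldsplit alternative mentioned above would use PCFIELDSPLIT (for example -pc_fieldsplit_block_size 2) if the real and imaginary parts are treated as separate fields rather than interleaved components.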
Good luck, this is a tricky business, Mark On Tue, Apr 14, 2020 at 3:33 PM Stefano Zampini wrote: > Tao does not support --with-scalar-type=complex > > Il Mar 14 Apr 2020, 22:09 Matthew Knepley ha scritto: > >> On Tue, Apr 14, 2020 at 2:44 PM Sajid Ali < >> sajidsyed2021 at u.northwestern.edu> wrote: >> >>> Hi Hong, >>> >>> Apologies for creating unnecessary confusion by continuing the old >>> thread instead of creating a new one. >>> >>> While I looked into converting the complex PDE formulation to a real >>> valued formulation in the past hoping for better performance, my concern >>> now is with TAO being incompatible with complex scalars. I would've >>> preferred to keep the complex PDE formulation as is (given that I spent >>> some time tuning it and it works well now) for cost function and gradient >>> evaluation while using TAO for the outer optimization loop. >>> >>> Using TAO has the obvious benefit of defining a multi objective cost >>> function, parametrized as a fit to a series of measurements and a set of >>> regularizers while not having to explicitly worry about differentiating the >>> regularizer or have to think about implementing a good optimization scheme. >>> But if it converting the complex formulation to a real formulation would >>> mean a loss of well conditioned forward solve (and increase in solving time >>> itself), I was wondering if it would be better to keep the complex PDE >>> formulation and write an optimization loop in PETSc while defining the >>> regularizer via a cost integrand. >>> >>> >> What exactly is the problem with TAO and complex? Is it only for some >> methods? >> >> Thanks, >> >> Matt >> >> >>> Thank You, >>> Sajid Ali | PhD Candidate >>> Applied Physics >>> Northwestern University >>> s-sajid-ali.github.io >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Tue Apr 14 17:04:24 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 14 Apr 2020 22:04:24 +0000 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> Message-ID: Sorry for the time travel. As far as I know, optimization over complex-valued parameters is not a well-defined problem. I am not sure how you can develop an optimization algorithm for it. Perhaps our optimization experts have better suggestions in this direction. The real-valued formulation seems to be more promising to me. The preconditioning is hard, but still doable with fieldsplit as Mark mentioned. Hong (Mr.) On Apr 14, 2020, at 1:42 PM, Sajid Ali > wrote: Hi Hong, Apologies for creating unnecessary confusion by continuing the old thread instead of creating a new one. While I looked into converting the complex PDE formulation to a real valued formulation in the past hoping for better performance, my concern now is with TAO being incompatible with complex scalars. I would've preferred to keep the complex PDE formulation as is (given that I spent some time tuning it and it works well now) for cost function and gradient evaluation while using TAO for the outer optimization loop. 
Using TAO has the obvious benefit of defining a multi objective cost function, parametrized as a fit to a series of measurements and a set of regularizers while not having to explicitly worry about differentiating the regularizer or have to think about implementing a good optimization scheme. But if it converting the complex formulation to a real formulation would mean a loss of well conditioned forward solve (and increase in solving time itself), I was wondering if it would be better to keep the complex PDE formulation and write an optimization loop in PETSc while defining the regularizer via a cost integrand. Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Tue Apr 14 17:25:37 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 15 Apr 2020 01:25:37 +0300 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> Message-ID: <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> Not true in general when you minimize an objective function as a functional of the parameter only For same methods (Newton for example, gradient descent, etc) the state variables do no enter the minimization, so it should be fine to have complex-valued state variables > On Apr 15, 2020, at 1:04 AM, Zhang, Hong via petsc-users wrote: > > Sorry for the time travel. As far as I know, optimization over complex-valued parameters is not a well-defined problem. I am not sure how you can develop an optimization algorithm for it. Perhaps our optimization experts have better suggestions in this direction. > > The real-valued formulation seems to be more promising to me. The preconditioning is hard, but still doable with fieldsplit as Mark mentioned. > > Hong (Mr.) > >> On Apr 14, 2020, at 1:42 PM, Sajid Ali > wrote: >> >> Hi Hong, >> >> Apologies for creating unnecessary confusion by continuing the old thread instead of creating a new one. >> >> While I looked into converting the complex PDE formulation to a real valued formulation in the past hoping for better performance, my concern now is with TAO being incompatible with complex scalars. I would've preferred to keep the complex PDE formulation as is (given that I spent some time tuning it and it works well now) for cost function and gradient evaluation while using TAO for the outer optimization loop. >> >> Using TAO has the obvious benefit of defining a multi objective cost function, parametrized as a fit to a series of measurements and a set of regularizers while not having to explicitly worry about differentiating the regularizer or have to think about implementing a good optimization scheme. But if it converting the complex formulation to a real formulation would mean a loss of well conditioned forward solve (and increase in solving time itself), I was wondering if it would be better to keep the complex PDE formulation and write an optimization loop in PETSc while defining the regularizer via a cost integrand. >> >> Thank You, >> Sajid Ali | PhD Candidate >> Applied Physics >> Northwestern University >> s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 14 17:48:02 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Apr 2020 18:48:02 -0400 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? 
In-Reply-To: <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> Message-ID: On Tue, Apr 14, 2020 at 6:26 PM Stefano Zampini wrote: > Not true in general when you minimize an objective function as a > functional of the parameter only > For same methods (Newton for example, gradient descent, etc) the state > variables do no enter the minimization, so it should be fine to have > complex-valued state variables > Yes, this was my thinking. Of course, there are problems which do not work, but I am guessing we could enable the complex build at least for experts. Thanks, Matt > On Apr 15, 2020, at 1:04 AM, Zhang, Hong via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Sorry for the time travel. As far as I know, optimization over > complex-valued parameters is not a well-defined problem. I am not sure how > you can develop an optimization algorithm for it. Perhaps our optimization > experts have better suggestions in this direction. > > The real-valued formulation seems to be more promising to me. The > preconditioning is hard, but still doable with fieldsplit as Mark mentioned. > > Hong (Mr.) > > On Apr 14, 2020, at 1:42 PM, Sajid Ali > wrote: > > Hi Hong, > > Apologies for creating unnecessary confusion by continuing the old thread > instead of creating a new one. > > While I looked into converting the complex PDE formulation to a real > valued formulation in the past hoping for better performance, my concern > now is with TAO being incompatible with complex scalars. I would've > preferred to keep the complex PDE formulation as is (given that I spent > some time tuning it and it works well now) for cost function and gradient > evaluation while using TAO for the outer optimization loop. > > Using TAO has the obvious benefit of defining a multi objective cost > function, parametrized as a fit to a series of measurements and a set of > regularizers while not having to explicitly worry about differentiating the > regularizer or have to think about implementing a good optimization scheme. > But if it converting the complex formulation to a real formulation would > mean a loss of well conditioned forward solve (and increase in solving time > itself), I was wondering if it would be better to keep the complex PDE > formulation and write an optimization loop in PETSc while defining the > regularizer via a cost integrand. > > Thank You, > Sajid Ali | PhD Candidate > Applied Physics > Northwestern University > s-sajid-ali.github.io > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From adener at anl.gov Tue Apr 14 17:52:46 2020 From: adener at anl.gov (Dener, Alp) Date: Tue, 14 Apr 2020 22:52:46 +0000 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> , <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> Message-ID: This is correct. As long as the optimization variables and the objective function, and its gradient are real valued, intermediate variables (such as PDE states) can be complex. 
In principle it is also possible to minimize real valued functions in complex variables by converting to rectangular or polar coordinate space and working with real numbers only (this will double the size of the optimization problem). The same transformation for complex valued functions yields a multi objective optimization problem. There?s no guarantee though that TAO algorithms will work with this out of the box. When PETSc is compiled complex, the above transformation yields TAO solution vectors that are still complex but carry all the information in the real component with zeros in the imaginary. Linear algebra with these vectors may not turn out to be equivalent to the real-compiled counterparts. This potential issue also applies to carrying around complex state variables in a PDE constrained problem. Even though the optimization algorithm never sees them, the PETSc data structures in TAO would still be complex valued with zeros in the imaginary. We?ve never tested TAO this way. Alp On Apr 14, 2020, at 5:26 PM, Stefano Zampini wrote: ? Not true in general when you minimize an objective function as a functional of the parameter only For same methods (Newton for example, gradient descent, etc) the state variables do no enter the minimization, so it should be fine to have complex-valued state variables On Apr 15, 2020, at 1:04 AM, Zhang, Hong via petsc-users > wrote: Sorry for the time travel. As far as I know, optimization over complex-valued parameters is not a well-defined problem. I am not sure how you can develop an optimization algorithm for it. Perhaps our optimization experts have better suggestions in this direction. The real-valued formulation seems to be more promising to me. The preconditioning is hard, but still doable with fieldsplit as Mark mentioned. Hong (Mr.) On Apr 14, 2020, at 1:42 PM, Sajid Ali > wrote: Hi Hong, Apologies for creating unnecessary confusion by continuing the old thread instead of creating a new one. While I looked into converting the complex PDE formulation to a real valued formulation in the past hoping for better performance, my concern now is with TAO being incompatible with complex scalars. I would've preferred to keep the complex PDE formulation as is (given that I spent some time tuning it and it works well now) for cost function and gradient evaluation while using TAO for the outer optimization loop. Using TAO has the obvious benefit of defining a multi objective cost function, parametrized as a fit to a series of measurements and a set of regularizers while not having to explicitly worry about differentiating the regularizer or have to think about implementing a good optimization scheme. But if it converting the complex formulation to a real formulation would mean a loss of well conditioned forward solve (and increase in solving time itself), I was wondering if it would be better to keep the complex PDE formulation and write an optimization loop in PETSc while defining the regularizer via a cost integrand. Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Apr 14 17:53:34 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 14 Apr 2020 16:53:34 -0600 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? 
In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> Message-ID: <87a73dptld.fsf@jedbrown.org> We'd have complex values in vectors that contain the likes of gradients with respect to (real-valued) parameters so there would likely need to be lots of PetscRealPart() within TAO. It won't just compile if we turn on complex, but these changes should be feasible and is surely a better solution than having users write real-equivalent formulations of their complex-valued PDE solvers. Matthew Knepley writes: > On Tue, Apr 14, 2020 at 6:26 PM Stefano Zampini > wrote: > >> Not true in general when you minimize an objective function as a >> functional of the parameter only >> For same methods (Newton for example, gradient descent, etc) the state >> variables do no enter the minimization, so it should be fine to have >> complex-valued state variables >> > > Yes, this was my thinking. Of course, there are problems which do not work, > but I am guessing would could enable > the complex build at least for experts. > > Thanks, > > Matt > > >> On Apr 15, 2020, at 1:04 AM, Zhang, Hong via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >> Sorry for the time travel. As far as I know, optimization over >> complex-valued parameters is not a well-defined problem. I am not sure how >> you can develop an optimization algorithm for it. Perhaps our optimization >> experts have better suggestions in this direction. >> >> The real-valued formulation seems to be more promising to me. The >> preconditioning is hard, but still doable with fieldsplit as Mark mentioned. >> >> Hong (Mr.) >> >> On Apr 14, 2020, at 1:42 PM, Sajid Ali >> wrote: >> >> Hi Hong, >> >> Apologies for creating unnecessary confusion by continuing the old thread >> instead of creating a new one. >> >> While I looked into converting the complex PDE formulation to a real >> valued formulation in the past hoping for better performance, my concern >> now is with TAO being incompatible with complex scalars. I would've >> preferred to keep the complex PDE formulation as is (given that I spent >> some time tuning it and it works well now) for cost function and gradient >> evaluation while using TAO for the outer optimization loop. >> >> Using TAO has the obvious benefit of defining a multi objective cost >> function, parametrized as a fit to a series of measurements and a set of >> regularizers while not having to explicitly worry about differentiating the >> regularizer or have to think about implementing a good optimization scheme. >> But if it converting the complex formulation to a real formulation would >> mean a loss of well conditioned forward solve (and increase in solving time >> itself), I was wondering if it would be better to keep the complex PDE >> formulation and write an optimization loop in PETSc while defining the >> regularizer via a cost integrand. >> >> Thank You, >> Sajid Ali | PhD Candidate >> Applied Physics >> Northwestern University >> s-sajid-ali.github.io >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From hongzhang at anl.gov Tue Apr 14 18:53:36 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 14 Apr 2020 23:53:36 +0000 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? 
In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> Message-ID: In Sajid?s problem, the optimization variables (F_t in the equation u_t = A*(u_xx + u_yy) + F_t*u) are complex-valued. The gradients should also be complex-valued. The objective function may be real-valued. Hong (Mr.) On Apr 14, 2020, at 5:52 PM, Dener, Alp > wrote: This is correct. As long as the optimization variables and the objective function, and it?s gradient are real valued, intermediate variables (such as PDE states) can be complex. In principle it is also possible to minimize real valued functions in complex variables by converting to rectangular or polar coordinate space and working with real numbers only (this will double the size of the optimization problem). The same transformation for complex valued functions yields a multi objective optimization problem. There?s no guarantee though that TAO algorithms will work with this out of the box. When PETSc is compiled complex, the above transformation yields TAO solution vectors that are still complex but carry all the information in the real component with zeros in the imaginary. Linear algebra with these vectors may not turn out to be equivalent to the real-compiled counterparts. This potential issue also applies to carrying around complex state variables in a PDE constrained problem. Even though the optimization algorithm never sees them, the PETSc data structures in TAO would still be complex valued with zeros in the imaginary. We?ve never tested TAO this way. Alp On Apr 14, 2020, at 5:26 PM, Stefano Zampini > wrote: ? Not true in general when you minimize an objective function as a functional of the parameter only For same methods (Newton for example, gradient descent, etc) the state variables do no enter the minimization, so it should be fine to have complex-valued state variables On Apr 15, 2020, at 1:04 AM, Zhang, Hong via petsc-users > wrote: Sorry for the time travel. As far as I know, optimization over complex-valued parameters is not a well-defined problem. I am not sure how you can develop an optimization algorithm for it. Perhaps our optimization experts have better suggestions in this direction. The real-valued formulation seems to be more promising to me. The preconditioning is hard, but still doable with fieldsplit as Mark mentioned. Hong (Mr.) On Apr 14, 2020, at 1:42 PM, Sajid Ali > wrote: Hi Hong, Apologies for creating unnecessary confusion by continuing the old thread instead of creating a new one. While I looked into converting the complex PDE formulation to a real valued formulation in the past hoping for better performance, my concern now is with TAO being incompatible with complex scalars. I would've preferred to keep the complex PDE formulation as is (given that I spent some time tuning it and it works well now) for cost function and gradient evaluation while using TAO for the outer optimization loop. Using TAO has the obvious benefit of defining a multi objective cost function, parametrized as a fit to a series of measurements and a set of regularizers while not having to explicitly worry about differentiating the regularizer or have to think about implementing a good optimization scheme. 
But if it converting the complex formulation to a real formulation would mean a loss of well conditioned forward solve (and increase in solving time itself), I was wondering if it would be better to keep the complex PDE formulation and write an optimization loop in PETSc while defining the regularizer via a cost integrand. Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbllm2018 at hotmail.com Wed Apr 15 00:23:49 2020 From: lbllm2018 at hotmail.com (Bin Liu) Date: Wed, 15 Apr 2020 05:23:49 +0000 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: References: Message-ID: Thanks for your example. My problem is resolved. Meanwhile I am wondering, if it is possible to make this example more flexible. I mean what if the columns in each row are different? Is there any way to insert them all together? Regards Bin From: Junchao Zhang [mailto:junchao.zhang at gmail.com] Sent: Monday, 13 April 2020 11:33 PM To: Bin Liu Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] inserting multiple rows together into a matrix Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 m=2; n=3; idxm[] = {2, 4}; idxn[] = {3, 7, 9}; v[6] = {0.1, 0.2, ....}; MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); --Junchao Zhang On Mon, Apr 13, 2020 at 9:59 AM Bin Liu > wrote: Hi all, I know how to insert values in one row into the matrix via routine ?MatSetValues?. I understand I logically should be able to insert multiple rows into the matrix with one call of ?MatSetValues?. However, I am not sure how to do it. I searched in the PETSc mail list and did not find a relevant question answered before. Could anyone help me and give me a simple example code? Regards B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbllm2018 at hotmail.com Wed Apr 15 00:24:58 2020 From: lbllm2018 at hotmail.com (Bin Liu) Date: Wed, 15 Apr 2020 05:24:58 +0000 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: <21BF415B-7D0A-4EBC-B2A8-3023A251EA02@gmail.com> References: <21BF415B-7D0A-4EBC-B2A8-3023A251EA02@gmail.com> Message-ID: Thanks Jacob. In my problem the rows/columns are not contiguous. Your suggestions are also very good. I will think about how to make use it somewhere else in my code. Best Bin From: Jacob Faibussowitsch [mailto:jacob.fai at gmail.com] Sent: Monday, 13 April 2020 11:47 PM To: Junchao Zhang Cc: Bin Liu ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] inserting multiple rows together into a matrix Also if you know that the rows/cols are contiguous (next to each other) in your sparse matrix then it is recommended to use MatSetValuesBlocked as it is more efficient. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) Cell: (312) 694-3391 On Apr 13, 2020, at 10:32 AM, Junchao Zhang > wrote: Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 m=2; n=3; idxm[] = {2, 4}; idxn[] = {3, 7, 9}; v[6] = {0.1, 0.2, ....}; MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); --Junchao Zhang On Mon, Apr 13, 2020 at 9:59 AM Bin Liu > wrote: Hi all, I know how to insert values in one row into the matrix via routine ?MatSetValues?. I understand I logically should be able to insert multiple rows into the matrix with one call of ?MatSetValues?. However, I am not sure how to do it. 
I searched in the PETSc mail list and did not find a relevant question answered before. Could anyone help me and give me a simple example code? Regards B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From karabelaselias at gmail.com Wed Apr 15 03:36:59 2020 From: karabelaselias at gmail.com (Elias Karabelas) Date: Wed, 15 Apr 2020 10:36:59 +0200 Subject: [petsc-users] Construct Matrix based on row and column values In-Reply-To: References: <87d0932kuu.fsf@jedbrown.org> <3f924d86-114f-bc6c-bd1b-cdeb0c825c33@gmail.com> <87a7472kcb.fsf@jedbrown.org> <2171016d-4c41-840d-7d60-14d1a5c2bd1e@gmail.com> Message-ID: <34d57c45-fa02-2062-05f8-99929f47a3f6@gmail.com> Hi Junchao, This seems to work. Thanks Elias On 08/04/2020 19:02, Junchao Zhang wrote: > Hi, Elias, > ? ?VecScatterToAll is implemented with MPI_Allgatherv. If not large > scale, I guess it won't be a problem.? I?assume you want to assemble a > MATMPIAIJ D with > ? ? ? D_ij = L_ij * (u[i] - u[j]) > ? Since D has the same sparsity pattern as L, we may have (not tested), > > Mat A,B; > const PetscInt *garray,*cols; > VecScatter vscat; > PetscInt i,j,m,n,ncols; > Vec ur; > PetscScalar *ulocal,*uremote,val,*vals; > IS from,to; > ierr = MatMPIAIJGetSeqAIJ(L,&A,&B,&garray);CHKERRQ(ierr); > ierr = MatGetSize(B,&NULL,&n);CHKERRQ(ierr); /* garray[]'s length = n */ > ierr = VecCreateSeq(PETSC_COMM_SELF,n,&ur);CHKERRQ(ierr); /* ur stores > needed off-proc entries of u */ > ierr = ISCreateStride(PETSC_COMM_SELF,n,0,1,&to); > ierr = > ISCreateGeneral(PETSC_COMM_SELF,n,garray,PETSC_COPY_VALUES,&from);CHKERRQ(ierr); > ierr = VecScatterCreate(u,from,ur,to,&vscat);CHKERRQ(ierr); /* vscat > is D's Mvctx, which however is not exposed to users */ > ierr = > VecScatterBegin(vscat,u,ur,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > ierr = > VecScatterEnd(vscat,u,ur,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr); > ierr = VecGetArrayRead(u,&ulocal);CHKERRQ(ierr); > ierr = VecGetArrayRead(ur,&uremote);CHKERRQ(ierr); > ierr = MatGetOwnershipRange(D,&rstart,NULL);CHKERRQ(ierr); > ierr = MatGetOwnershipRangeColumn(D,&cstart,NULL); > ierr = MatDuplicate(L,&D,MAT_DO_NOT_COPY_VALUES);CHKERRQ(ierr); > ierr = > MatSetOption(D,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); > ierr = MatAssemblyBegin(D,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatGetSize(A,&m,NULL);CHKERRQ(ierr); > for (i=0; i ierr = MatGetRow(A,i,ncols,cols,vals); > for (j=0; j val = vals[j]*(ulocal[i] - ulocal[cols[j]]); > ierr = > MatSetValue(D,rstart+i,cstart+cols[j],val,INSERT_VALUES);CHKERRQ(ierr); > } > ierr = MatGetRow(B,i,ncols,cols,vals); > for (j=0; j val = vals[j]*(ulocal[i] - uremote[cols[j]]); > ierr = > MatSetValue(D,rstart+i,garray[cols[j]],val,INSERT_VALUES);CHKERRQ(ierr); > } > } > ierr = MatAssemblyEnd(D,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = VecRestoreArrayRead(u,&ulocal);CHKERRQ(ierr); > ierr = VecRestoreArrayRead(ur,&uremote);CHKERRQ(ierr); > > --Junchao Zhang > > On Wed, Apr 8, 2020 at 2:17 AM Elias Karabelas > > wrote: > > Dear Jed, > > I'm done implementing my FCT-Solver and it works fairly well on a > small > amount of MPI-Procs. Additionally to your little snippet I have > used a > VecScatterToAll. > > Reason is that the flux correction f_i takes the form Sum_{j} > alpha_ij > r_ij where r_ij is defined on the sparsity pattern of my FEM > Matrix and > alpha_ij is based on two vectors Rp and Rm. So basically I need > off-process values of these vectors to construct alpha which I > made with > a VecScatterToAll. 
However I guess this will slow down my overall > program quite significant. Any Ideas? > > Best regards > > Elias > > On 23/03/2020 15:53, Jed Brown wrote: > > Thanks; please don't drop the list. > > > > I'd be curious whether this operation is common enough that we > should > > add it to PETSc.? My hesitance has been that people may want many > > different variants when working with systems of equations, for > example. > > > > Elias Karabelas > writes: > > > >> Dear Jed, > >> > >> Yes the Matrix A comes from assembling a FEM-convection-diffusion > >> operator over a tetrahedral mesh. So my matrix graph should be > >> symmetric. Thanks for the snippet > >> > >> On 23/03/2020 15:42, Jed Brown wrote: > >>> Elias Karabelas > writes: > >>> > >>>> Dear Users, > >>>> > >>>> I want to implement a FCT (flux corrected transport) scheme > with PETSc. > >>>> To this end I have amongst other things create a Matrix whose > entries > >>>> are given by > >>>> > >>>> L_ij = -max(0, A_ij, A_ji) for i neq j > >>>> > >>>> L_ii = Sum_{j=0,..n, j neq i} L_ij > >>>> > >>>> where Mat A is an (non-symmetric) Input Matrix created > beforehand. > >>>> > >>>> I was wondering how to do this. My first search brought me to > >>>> > https://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex16.c.html > >>>> > >>>> > >>>> but this just goes over the rows of one matrix to set new > values and now > >>>> I would need to run over the rows and columns of the matrix. > My Idea was > >>>> to just create a transpose of A and do the same but then the > row-layout > >>>> will be different and I can't use the same for loop for A and > AT and > >>>> thus also won't be able to calculate the max's above. > >>> Does your matrix have symmetric nonzero structure?? (It's > typical for > >>> finite element methods.) > >>> > >>> If so, all the indices will match up so I think you can do > something like: > >>> > >>> for (row=rowstart; row >>>? ? ?PetscScalar Lvals[MAX_LEN]; > >>>? ? ?PetscInt diag; > >>>? ? ?MatGetRow(A, row, &ncols, &cols, &vals); > >>>? ? ?MatGetRow(At, row, &ncolst, &colst, &valst); > >>>? ? ?assert(ncols == ncolst); // symmetric structure > >>>? ? ?PetscScalar sum = 0; > >>>? ? ?for (c=0; c >>>? ? ? ?assert(cols[c] == colst[c]); // symmetric structure > >>>? ? ? ?if (cols[c] == row) diag = c; > >>>? ? ? ?else sum -= (Lvals[c] = -max(0, vals[c], valst[c])); > >>>? ? ?} > >>>? ? ?Lvals[diag] = sum; > >>>? ? ?MatSetValues(L, 1, &row, ncols, cols, Lvals, INSERT_VALUES); > >>>? ? ?MatRestoreRow(A, row, &ncols, &cols, &vals); > >>>? ? ?MatRestoreRow(At, row, &ncolst, &colst, &valst); > >>> } > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 15 06:00:34 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Apr 2020 07:00:34 -0400 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: References: Message-ID: On Wed, Apr 15, 2020 at 1:24 AM Bin Liu wrote: > Thanks for your example. My problem is resolved. Meanwhile I am wondering, > if it is possible to make this example more flexible. I mean what if the > columns in each row are different? Is there any way to insert them all > together? > No. If the columns are different, you make a separate calls to MatSetValues(). 
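For instance, a minimal sketch (indices and values made up here) that inserts row 2 with columns {3, 7, 9} and row 4 with columns {1, 5} would use one call per row and assemble once at the end:

PetscInt    row2 = 2, cols2[3] = {3, 7, 9};
PetscInt    row4 = 4, cols4[2] = {1, 5};
PetscScalar v2[3] = {0.1, 0.2, 0.3};
PetscScalar v4[2] = {0.4, 0.5};

ierr = MatSetValues(mat, 1, &row2, 3, cols2, v2, INSERT_VALUES);CHKERRQ(ierr); /* row 2 */
ierr = MatSetValues(mat, 1, &row4, 2, cols4, v4, INSERT_VALUES);CHKERRQ(ierr); /* row 4 */
ierr = MatAssemblyBegin(mat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);                /* assemble once after all rows are set */
ierr = MatAssemblyEnd(mat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
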
Thanks, Matt > > > Regards > > Bin > > > > *From:* Junchao Zhang [mailto:junchao.zhang at gmail.com] > *Sent:* Monday, 13 April 2020 11:33 PM > *To:* Bin Liu > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] inserting multiple rows together into a > matrix > > > > Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 > > m=2; > > n=3; > > idxm[] = {2, 4}; > > idxn[] = {3, 7, 9}; > > v[6] = {0.1, 0.2, ....}; > > MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); > > > --Junchao Zhang > > > > > > On Mon, Apr 13, 2020 at 9:59 AM Bin Liu wrote: > > Hi all, > > > > I know how to insert values in one row into the matrix via routine > ?MatSetValues?. I understand I logically should be able to insert multiple > rows into the matrix with one call of ?MatSetValues?. However, I am not > sure how to do it. I searched in the PETSc mail list and did not find a > relevant question answered before. Could anyone help me and give me a > simple example code? > > > > Regards > > B. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Apr 15 06:20:26 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 07:20:26 -0400 Subject: [petsc-users] CUDA error Message-ID: I tried using a serial direct solver in cusparse and got bad numerics: -vector_type cuda -mat_type aijcusparse -pc_factor_mat_solver_type cusparse Before I start debugging this I wanted to see if there are any known issues that I should be aware of. Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Wed Apr 15 07:24:30 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 15 Apr 2020 15:24:30 +0300 Subject: [petsc-users] CUDA error In-Reply-To: References: Message-ID: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> Mark I have fixed few things in the solver and it is tested with the current master. Can you write a MWE to reproduce the issue? Which version of CUDA and CUSPARSE are you using? I was planning to reorganize the factor code in AIJCUSPARSE in the next days. 
kl-18967:petsc zampins$ git grep "solver_type cusparse" src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format ell -vec_type cuda -pc_type ilu src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/shallow_water1 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format hyb -vec_type cuda -ksp_type cg -pc_type icc src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg -pc_type ilu src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg -pc_type ilu -pc_factor_mat_ordering_type nd src/ksp/ksp/examples/tutorials/ex46.c: args: -dm_mat_type aijcusparse -dm_vec_type cuda -random_exact_sol -pc_type ilu -pc_factor_mat_solver_type cusparse src/ksp/ksp/examples/tutorials/ex59.c: args: -subdomain_mat_type aijcusparse -physical_pc_bddc_dirichlet_pc_factor_mat_solver_type cusparse src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda src/ksp/ksp/examples/tutorials/ex71.c: args: -pde_type Poisson -cells 7,9,8 -dim 3 -ksp_view -pc_bddc_coarse_redundant_pc_type svd -ksp_error_if_not_converged -pc_bddc_dirichlet_pc_type cholesky -pc_bddc_dirichlet_pc_factor_mat_solver_type cusparse -pc_bddc_dirichlet_pc_factor_mat_ordering_type nd -pc_bddc_neumann_pc_type cholesky -pc_bddc_neumann_pc_factor_mat_solver_type cusparse -pc_bddc_neumann_pc_factor_mat_ordering_type nd -matis_localmat_type aijcusparse src/ksp/ksp/examples/tutorials/ex72.c: args: -f0 ${DATAFILESPATH}/matrices/medium -ksp_monitor_short -ksp_view -mat_view ascii::ascii_info -mat_type aijcusparse -pc_factor_mat_solver_type cusparse -pc_type ilu -vec_type cuda src/snes/examples/tutorials/ex12.c: args: -matis_localmat_type aijcusparse -pc_bddc_dirichlet_pc_factor_mat_solver_type cusparse -pc_bddc_neumann_pc_factor_mat_solver_type cusparse > On Apr 15, 2020, at 2:20 PM, Mark Adams wrote: > > I tried using a serial direct solver in cusparse and got bad numerics: > > -vector_type cuda -mat_type aijcusparse -pc_factor_mat_solver_type cusparse > > Before I start debugging this I wanted to see if there are any known issues that I should be aware of. > > Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From hu.ds.abel at icloud.com Wed Apr 15 08:59:02 2020 From: hu.ds.abel at icloud.com (huabel) Date: Wed, 15 Apr 2020 21:59:02 +0800 Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL Message-ID: Dear Users, I?m try to use petsc3.13 with ViennaCL , when I try to run src/vec/vec/tutorials/ex1.c, I get next error, thanks. 
dyld: Symbol not found: _MatCreate_MPIAIJViennaCL Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib Expected in: flat namespace in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib [1] 22602 abort ./ex1 ? tutorials git:(master) ? ./ex1 -vec_type viennacl -mat_type aijviennacl dyld: Symbol not found: _MatCreate_MPIAIJViennaCL Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib Expected in: flat namespace in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib [1] 23268 abort ./ex1 -vec_type viennacl -mat_type aijviennacl Thanks Abel Hu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1147965 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 107830 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From sajidsyed2021 at u.northwestern.edu Wed Apr 15 09:02:51 2020 From: sajidsyed2021 at u.northwestern.edu (Sajid Ali) Date: Wed, 15 Apr 2020 09:02:51 -0500 Subject: [petsc-users] Converting complex PDE to real for KNL performance ? In-Reply-To: References: <482D7D41-40C8-4321-B9B6-B7B30FBA1A76@anl.gov> <232532B5-525A-42A3-B14A-441869801B2F@gmail.com> Message-ID: Hi everyone, As Hong pointed out the optimization variable and gradient are both complex in my use case. Just to give some context, the TS solves the IVP with the parameters representing the refractive indices of the object at a given orientation (Ni orientations in total). The optimization problem to solve is : obtain F such that for each ?i ? (0; ?), obtain y?i = TS(A?i ? F) (where A?i represents a sparse matrix that rotates the F vector by angle ?i.) Thus, a naive implementation for the same would be : for i ? (0; Ni) : - obtain parameters for this orientation by MatMult( A?i*F ) - obtain y?i = TS(A?i ? F) and y'?i = TSAdjointSolve(A?i ? F) (cost function being L2 norm of y?i and actual data) - rotate gradient back by MatMultTranspose(A?i*F) and update F. But in the future I'd have preferred to bunch the Ni misfits (with bounds and regularizers) together as a multi-objective cost function and let TAO handle the parallelization (whereby TAO is initialized with `mpi_comm_world` but each PDE evaluation happens in it's own `sub-comm` and TAO handles the synchronization for updates) and the order of estimations (instead of naive sequential). While I don't know the optimization theory behind it, the current practise in the x-ray community is to model the forward solve using FFT's instead and use algorithmic differentiation to obtain the gradients. My motivation for exploring the use of PDE's is due to (a) Adjoint solves being faster when compared to algorithmic differentiation (b) Multigrid solvers being fast/optimal (c) PDE models being more accurate on downsampled data. PS : @Alp : Could you share the slides/manuscript from the siam pp20 meeting that describes the new multi-objective minimization features in TAO ? Thank You, Sajid Ali | PhD Candidate Applied Physics Northwestern University s-sajid-ali.github.io -------------- next part -------------- An HTML attachment was scrubbed... 
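As a purely illustrative sketch, the naive per-orientation loop described above might look roughly like the following; R[i], data[i], the ts object and the RunForwardAndAdjoint helper are hypothetical names standing in for the application-specific pieces, not an existing API:

PetscInt  i;
PetscReal step = 1.0e-2;                                  /* made-up fixed step size */
Vec       F, G, Frot, Grot;                               /* parameters, gradient, rotated work vectors (created elsewhere) */

for (i = 0; i < Ni; ++i) {
  ierr = MatMult(R[i], F, Frot);CHKERRQ(ierr);            /* A_theta_i * F: parameters at orientation i */
  ierr = RunForwardAndAdjoint(ts, Frot, data[i], Grot);CHKERRQ(ierr); /* hypothetical wrapper: TSSolve + TSAdjointSolve + misfit */
  ierr = MatMultTranspose(R[i], Grot, G);CHKERRQ(ierr);   /* rotate the gradient back to the reference frame */
  ierr = VecAXPY(F, -step, G);CHKERRQ(ierr);              /* simple steepest-descent update of F */
}
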
URL: From mfadams at lbl.gov Wed Apr 15 09:20:45 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 10:20:45 -0400 Subject: [petsc-users] CUDA error In-Reply-To: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> References: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> Message-ID: On Wed, Apr 15, 2020 at 8:24 AM Stefano Zampini wrote: > Mark > > I have fixed few things in the solver and it is tested with the current > master. > I rebased with master over the weekend .... > Can you write a MWE to reproduce the issue? Which version of CUDA and > CUSPARSE are you using? > You can use mark/feature-xgc-interface-rebase branch and add '-mat_type seqaijcusparse -fp_pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format ell -vec_type cuda' to dm/impls/plex/tutorials/ex10.c The first stage, SNES solve, actually looks OK here. Maybe. Thanks, 10:01 mark/feature-xgc-interface-rebase *= ~/petsc$ make -f gmakefile test search='dm_impls_plex_tutorials-ex10_0' /usr/bin/python /ccs/home/adams/petsc/config/gmakegentest.py --petsc-dir=/ccs/home/adams/petsc --petsc-arch=arch-summit-opt64-gnu-cuda --testdir=./arch-summit-opt64-gnu-cuda/tests Using MAKEFLAGS: search=dm_impls_plex_tutorials-ex10_0 CC arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10.o CLINKER arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10 TEST arch-summit-opt64-gnu-cuda/tests/counts/dm_impls_plex_tutorials-ex10_0.counts ok dm_impls_plex_tutorials-ex10_0 not ok diff-dm_impls_plex_tutorials-ex10_0 # Error code: 1 # 14,16c14,16 # < 0 SNES Function norm 6.184233768573e-04 # < 1 SNES Function norm 1.467479466750e-08 # < 2 SNES Function norm 7.863111141350e-12 # --- # > 0 SNES Function norm 6.184233768572e-04 # > 1 SNES Function norm 1.467479466739e-08 # > 2 SNES Function norm 7.863102870090e-12 # 18,31c18,256 # < 0 SNES Function norm 6.182952107532e-04 # < 1 SNES Function norm 7.336382211149e-09 # < 2 SNES Function norm 1.566979901443e-11 # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 # < 0 SNES Function norm 6.183592738545e-04 # < 1 SNES Function norm 7.337681407420e-09 # < 2 SNES Function norm 1.408823933908e-11 # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 0.0396569, accepting step of size 1e-06 # < 1 TS dt 1.25e-06 time 1e-06 # < 1) species-0: charge density= -1.6024814608984e+01 z-momentum= 2.0080682964364e-19 energy= 1.2018000284846e+05 # < 1) species-1: charge density= 1.6021676653316e+01 z-momentum= 1.4964483981137e-17 energy= 1.2017223215083e+05 # < 1) species-2: charge density= 2.8838441139703e-03 z-momentum= -1.1062018110807e-23 energy= 1.2019641370376e-03 # < 1) Total: charge density= -2.5411155383649e-04, momentum= 1.5165279748763e-17, energy= 2.4035223620125e+05 (m_i[0]/m_e = 3670.94, 140 cells), 1 sub threads # --- # > 0 SNES Function norm 6.182952107531e-04 # > 1 SNES Function norm 6.181600164904e-04 # > 2 SNES Function norm 6.180249471739e-04 # > 3 SNES Function norm 6.178899987549e-04 > I was planning to reorganize the factor code in AIJCUSPARSE in the next > days. 
> > kl-18967:petsc zampins$ git grep "solver_type cusparse" > src/ksp/ksp/examples/tests/ex43.c: args: -f > ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type > cusparse* -mat_cusparse_storage_format ell -vec_type cuda -pc_type ilu > src/ksp/ksp/examples/tests/ex43.c: args: -f > ${DATAFILESPATH}/matrices/shallow_water1 -mat_type seqaijcusparse > -pc_factor_mat_*solver_type cusparse* -mat_cusparse_storage_format hyb > -vec_type cuda -ksp_type cg -pc_type icc > src/ksp/ksp/examples/tests/ex43.c: args: -f > ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type > cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg > -pc_type ilu > src/ksp/ksp/examples/tests/ex43.c: args: -f > ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type > cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg > -pc_type ilu -pc_factor_mat_ordering_type nd > src/ksp/ksp/examples/tutorials/ex46.c: args: -dm_mat_type > aijcusparse -dm_vec_type cuda -random_exact_sol -pc_type ilu -pc_factor_mat_*solver_type > cusparse* > src/ksp/ksp/examples/tutorials/ex59.c: args: -subdomain_mat_type > aijcusparse -physical_pc_bddc_dirichlet_pc_factor_mat_*solver_type > cusparse* > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short > -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* -vec_type > cuda -sub_ksp_type preonly -sub_pc_type ilu > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short > -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* -vec_type > cuda -sub_ksp_type preonly -sub_pc_type ilu > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short > -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* -vec_type > cuda > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short > -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* -vec_type > cuda > src/ksp/ksp/examples/tutorials/ex71.c: args: -pde_type Poisson -cells > 7,9,8 -dim 3 -ksp_view -pc_bddc_coarse_redundant_pc_type svd > -ksp_error_if_not_converged -pc_bddc_dirichlet_pc_type cholesky > -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* > -pc_bddc_dirichlet_pc_factor_mat_ordering_type nd -pc_bddc_neumann_pc_type > cholesky -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* > -pc_bddc_neumann_pc_factor_mat_ordering_type nd -matis_localmat_type > aijcusparse > src/ksp/ksp/examples/tutorials/ex72.c: args: -f0 > ${DATAFILESPATH}/matrices/medium -ksp_monitor_short -ksp_view -mat_view > ascii::ascii_info -mat_type aijcusparse -pc_factor_mat_*solver_type > cusparse* -pc_type ilu -vec_type cuda > src/snes/examples/tutorials/ex12.c: args: -matis_localmat_type > aijcusparse -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* > -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* > > On Apr 15, 2020, at 2:20 PM, Mark Adams wrote: > > I tried using a serial direct solver in cusparse and got bad numerics: > > -vector_type cuda -mat_type aijcusparse -pc_factor_mat_solver_type > cusparse > > Before I start debugging this I wanted to see if there are any known > issues that I should be aware of. > > Thanks, > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rlmackie862 at gmail.com Wed Apr 15 10:28:55 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Wed, 15 Apr 2020 08:28:55 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> Message-ID: <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> Hi Junchao, So I was able to create a small test code that duplicates the issue we have been having, and it is attached to this email in a zip file. Included is the test.F90 code, the commands to duplicate crash and to duplicate a successful run, output errors, and our petsc configuration. Our findings to date include: The error is reproducible in a very short time with this script It is related to nproc*nsubs and (although to a less extent) to DM grid size It happens regardless of MPI implementation (mpich, intel mpi 2018, 2019, openmpi) or compiler (gfortran/gcc , intel 2018) No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to slightly increase the limit, but still fails on the full machine set. Nothing looks interesting on valgrind Our initial tests were carried out on an Azure cluster, but we also tested on our smaller cluster, and we found the following: Works: $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 Crashes (this works on Azure) $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 So it looks like it may also be related to the physical number of nodes as well. In any case, even with 2560 processes on 192 cores the memory does not go above 3.5 Gbyes so you don?t need a huge cluster to test. Thanks, Randy M. > On Apr 14, 2020, at 12:23 PM, Junchao Zhang wrote: > > There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I doubted it was the problem. Even if users configure petsc with 64-bit indices, we use PetscMPIInt in MPI calls. So it is not a problem. > Try -vecscatter_type mpi1 to restore to the original VecScatter implementation. If the problem still remains, could you provide a test example for me to debug? > > --Junchao Zhang > > > On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie > wrote: > Hi Junchao, > > We have tried your two suggestions but the problem remains. > And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. > > We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the problem must be elsewhere and not MPI. > > Give that this is a 64 bit indices build of PETSc, is there some possible incompatibility between PETSc and MPI calls? > > We are open to any other possible suggestions to try as other than valgrind on thousands of processes we seem to have run out of ideas. > > Thanks, Randy M. > >> On Apr 13, 2020, at 8:54 AM, Junchao Zhang > wrote: >> >> >> --Junchao Zhang >> >> >> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang > wrote: >> Randy, >> Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr >> Correct: I_MPI_ADJUST_ALLREDUCE=1 >> But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. >> Thanks. 
>> --Junchao Zhang >> >> >> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: >> Dear PETSc users, >> >> We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. >> This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. >> >> The run generates the following messages, but doesn?t crash, just seems to hang with all processes continuing to show activity: >> >> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >> >> >> Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend: >> >> >> /* Post the Isends with the message length-info */ >> for (i=0,j=0; i> if (ilengths[i]) { >> ierr = MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >> j++; >> } >> } >> >> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the same problem. >> >> We suspect there is some limit being set on this cloud cluster on the number of file connections or something, but we don?t know. >> >> Anyone have any ideas? We are sort of grasping for straws at this point. >> >> Thanks, Randy M. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: to_petsc-users.zip Type: application/zip Size: 6652 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 15 10:58:11 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 10:58:11 -0500 (CDT) Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL In-Reply-To: References: Message-ID: > Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --prefix=/Users/fire/opt/petsc313 --with-zlib --with-viennacl=1 --with-viennacl-dir=/Users/fire/opt/viennacl I guess you are running viennacl (opencl) on CPU. please try the attached patch. cd petsc patch -Np1 < viennacl.patch Or use branch balay/viennacl-cpu-check/maint in petsc repo Satish On Wed, 15 Apr 2020, huabel via petsc-users wrote: > Dear Users, > > I?m try to use petsc3.13 with ViennaCL , when I try to run src/vec/vec/tutorials/ex1.c, I get next error, thanks. > > dyld: Symbol not found: _MatCreate_MPIAIJViennaCL > Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > Expected in: flat namespace > in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > [1] 22602 abort ./ex1 > > ? tutorials git:(master) ? 
./ex1 -vec_type viennacl -mat_type aijviennacl > dyld: Symbol not found: _MatCreate_MPIAIJViennaCL > Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > Expected in: flat namespace > in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > [1] 23268 abort ./ex1 -vec_type viennacl -mat_type aijviennacl > > > > Thanks > Abel Hu > > -------------- next part -------------- diff --git a/config/BuildSystem/config/packages/viennacl.py b/config/BuildSystem/config/packages/viennacl.py index aa614c7af8..9e3076cb50 100644 --- a/config/BuildSystem/config/packages/viennacl.py +++ b/config/BuildSystem/config/packages/viennacl.py @@ -41,10 +41,10 @@ class Configure(config.package.Package): shutil.copytree(srcdir,destdir) except RuntimeError as e: raise RuntimeError('Error installing ViennaCL include files: '+str(e)) + return self.installDir + def configureLibrary(self): + config.package.Package.configureLibrary(self) #check for CUDA: if not self.cuda.found: self.addDefine('HAVE_VIENNACL_NO_CUDA', 1) - - return self.installDir - From stefano.zampini at gmail.com Wed Apr 15 11:42:57 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 15 Apr 2020 19:42:57 +0300 Subject: [petsc-users] CUDA error In-Reply-To: References: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> Message-ID: Mark attached is the patch. I will open an MR in the next days if you confirm it is working for you The issue is that CUSPARSE does not have a way to compute the triangular factors, so we demand the computation of the factors to PETSc (CPU). These factors are then copied to the GPU. What was happening in the second step of SNES, was that the factors were never updated since the offloadmask was never updated. The real issue is that the CUSPARSE support in PETSc is really in bad shape and mostly untested, with coding solutions that are probably outdated now. I'll see what I can do to fix the class if I have time in the next weeks. Stefano Il giorno mer 15 apr 2020 alle ore 17:21 Mark Adams ha scritto: > > > On Wed, Apr 15, 2020 at 8:24 AM Stefano Zampini > wrote: > >> Mark >> >> I have fixed few things in the solver and it is tested with the current >> master. >> > > I rebased with master over the weekend .... > > >> Can you write a MWE to reproduce the issue? Which version of CUDA and >> CUSPARSE are you using? >> > > You can use mark/feature-xgc-interface-rebase branch and add '-mat_type > seqaijcusparse -fp_pc_factor_mat_solver_type cusparse > -mat_cusparse_storage_format ell -vec_type cuda' > to dm/impls/plex/tutorials/ex10.c > > The first stage, SNES solve, actually looks OK here. Maybe. 
> > Thanks, > > 10:01 mark/feature-xgc-interface-rebase *= ~/petsc$ make -f gmakefile test > search='dm_impls_plex_tutorials-ex10_0' > /usr/bin/python /ccs/home/adams/petsc/config/gmakegentest.py > --petsc-dir=/ccs/home/adams/petsc --petsc-arch=arch-summit-opt64-gnu-cuda > --testdir=./arch-summit-opt64-gnu-cuda/tests > Using MAKEFLAGS: search=dm_impls_plex_tutorials-ex10_0 > CC > arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10.o > CLINKER arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10 > TEST > arch-summit-opt64-gnu-cuda/tests/counts/dm_impls_plex_tutorials-ex10_0.counts > ok dm_impls_plex_tutorials-ex10_0 > not ok diff-dm_impls_plex_tutorials-ex10_0 # Error code: 1 > # 14,16c14,16 > # < 0 SNES Function norm 6.184233768573e-04 > # < 1 SNES Function norm 1.467479466750e-08 > # < 2 SNES Function norm 7.863111141350e-12 > # --- > # > 0 SNES Function norm 6.184233768572e-04 > # > 1 SNES Function norm 1.467479466739e-08 > # > 2 SNES Function norm 7.863102870090e-12 > # 18,31c18,256 > # < 0 SNES Function norm 6.182952107532e-04 > # < 1 SNES Function norm 7.336382211149e-09 > # < 2 SNES Function norm 1.566979901443e-11 > # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE > iterations 2 > # < 0 SNES Function norm 6.183592738545e-04 > # < 1 SNES Function norm 7.337681407420e-09 > # < 2 SNES Function norm 1.408823933908e-11 > # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE > iterations 2 > # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation > error 0.0396569, accepting step of size 1e-06 > # < 1 TS dt 1.25e-06 time 1e-06 > # < 1) species-0: charge density= -1.6024814608984e+01 z-momentum= > 2.0080682964364e-19 energy= 1.2018000284846e+05 > # < 1) species-1: charge density= 1.6021676653316e+01 z-momentum= > 1.4964483981137e-17 energy= 1.2017223215083e+05 > # < 1) species-2: charge density= 2.8838441139703e-03 z-momentum= > -1.1062018110807e-23 energy= 1.2019641370376e-03 > # < 1) Total: charge density= -2.5411155383649e-04, > momentum= 1.5165279748763e-17, energy= 2.4035223620125e+05 (m_i[0]/m_e = > 3670.94, 140 cells), 1 sub threads > # --- > # > 0 SNES Function norm 6.182952107531e-04 > # > 1 SNES Function norm 6.181600164904e-04 > # > 2 SNES Function norm 6.180249471739e-04 > # > 3 SNES Function norm 6.178899987549e-04 > > >> I was planning to reorganize the factor code in AIJCUSPARSE in the next >> days. 
>> >> kl-18967:petsc zampins$ git grep "solver_type cusparse" >> src/ksp/ksp/examples/tests/ex43.c: args: -f >> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >> cusparse* -mat_cusparse_storage_format ell -vec_type cuda -pc_type ilu >> src/ksp/ksp/examples/tests/ex43.c: args: -f >> ${DATAFILESPATH}/matrices/shallow_water1 -mat_type seqaijcusparse >> -pc_factor_mat_*solver_type cusparse* -mat_cusparse_storage_format hyb >> -vec_type cuda -ksp_type cg -pc_type icc >> src/ksp/ksp/examples/tests/ex43.c: args: -f >> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >> cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg >> -pc_type ilu >> src/ksp/ksp/examples/tests/ex43.c: args: -f >> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >> cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg >> -pc_type ilu -pc_factor_mat_ordering_type nd >> src/ksp/ksp/examples/tutorials/ex46.c: args: -dm_mat_type >> aijcusparse -dm_vec_type cuda -random_exact_sol -pc_type ilu -pc_factor_mat_*solver_type >> cusparse* >> src/ksp/ksp/examples/tutorials/ex59.c: args: -subdomain_mat_type >> aijcusparse -physical_pc_bddc_dirichlet_pc_factor_mat_*solver_type >> cusparse* >> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >> -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu >> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >> -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu >> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >> -vec_type cuda >> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >> -vec_type cuda >> src/ksp/ksp/examples/tutorials/ex71.c: args: -pde_type Poisson -cells >> 7,9,8 -dim 3 -ksp_view -pc_bddc_coarse_redundant_pc_type svd >> -ksp_error_if_not_converged -pc_bddc_dirichlet_pc_type cholesky >> -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* >> -pc_bddc_dirichlet_pc_factor_mat_ordering_type nd -pc_bddc_neumann_pc_type >> cholesky -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* >> -pc_bddc_neumann_pc_factor_mat_ordering_type nd -matis_localmat_type >> aijcusparse >> src/ksp/ksp/examples/tutorials/ex72.c: args: -f0 >> ${DATAFILESPATH}/matrices/medium -ksp_monitor_short -ksp_view -mat_view >> ascii::ascii_info -mat_type aijcusparse -pc_factor_mat_*solver_type >> cusparse* -pc_type ilu -vec_type cuda >> src/snes/examples/tutorials/ex12.c: args: -matis_localmat_type >> aijcusparse -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* >> -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* >> >> On Apr 15, 2020, at 2:20 PM, Mark Adams wrote: >> >> I tried using a serial direct solver in cusparse and got bad numerics: >> >> -vector_type cuda -mat_type aijcusparse -pc_factor_mat_solver_type >> cusparse >> >> Before I start debugging this I wanted to see if there are any known >> issues that I should be aware of. >> >> Thanks, >> >> >> -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: patch_for_mark Type: application/octet-stream Size: 3695 bytes Desc: not available URL: From hu.ds.abel at icloud.com Wed Apr 15 12:04:22 2020 From: hu.ds.abel at icloud.com (huabel) Date: Thu, 16 Apr 2020 01:04:22 +0800 Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL In-Reply-To: References: Message-ID: <2E1654F2-55C4-4F8C-A06E-EBE74758E386@icloud.com> Hi Satish, that patch is good, thank you! > On Apr 15, 2020, at 23:58, Satish Balay wrote: > >> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --prefix=/Users/fire/opt/petsc313 --with-zlib --with-viennacl=1 --with-viennacl-dir=/Users/fire/opt/viennacl > > I guess you are running viennacl (opencl) on CPU. > > please try the attached patch. > > cd petsc > patch -Np1 < viennacl.patch > > Or use branch balay/viennacl-cpu-check/maint in petsc repo > > Satish > > On Wed, 15 Apr 2020, huabel via petsc-users wrote: > >> Dear Users, >> >> I?m try to use petsc3.13 with ViennaCL , when I try to run src/vec/vec/tutorials/ex1.c, I get next error, thanks. >> >> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL >> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >> Expected in: flat namespace >> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >> [1] 22602 abort ./ex1 >> >> ? tutorials git:(master) ? ./ex1 -vec_type viennacl -mat_type aijviennacl >> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL >> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >> Expected in: flat namespace >> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >> [1] 23268 abort ./ex1 -vec_type viennacl -mat_type aijviennacl >> >> >> >> Thanks >> Abel Hu >> > From mfadams at lbl.gov Wed Apr 15 14:14:16 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 15:14:16 -0400 Subject: [petsc-users] CUDA error In-Reply-To: References: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> Message-ID: Thanks, it looks correct. I am getting memory leaks (appended) And something horrible is going on with performance: MatLUFactorNum 130 1.0 9.2220e+00 1.0 6.51e+08 1.0 0.0e+00 0.0e+00 0.0e+00 30 0 0 0 0 30 0 0 0 0 71 0 390 3.33e+02 0 0.00e+00 0 MatLUFactorNum 130 1.0 6.5177e-01 1.0 1.28e+09 1.0 0.0e+00 0.0e+00 0.0e+00 4 1 0 0 0 4 1 0 0 0 1966 0 0 0.00e+00 0 0.00e+00 0 This is not urgent, but I'd like to get a serial LU GPU solver at some point. Thanks again, Mark Lots of these: [ 0]32 bytes VecCUDAAllocateCheck() line 34 in /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ veccuda2.cu [ 0]32 bytes VecCUDAAllocateCheck() line 34 in /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ veccuda2.cu [ 0]32 bytes VecCUDAAllocateCheck() line 34 in /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ veccuda2.cu On Wed, Apr 15, 2020 at 12:47 PM Stefano Zampini wrote: > Mark > > attached is the patch. I will open an MR in the next days if you confirm > it is working for you > The issue is that CUSPARSE does not have a way to compute the triangular > factors, so we demand the computation of the factors to PETSc (CPU). These > factors are then copied to the GPU. > What was happening in the second step of SNES, was that the factors were > never updated since the offloadmask was never updated. > > The real issue is that the CUSPARSE support in PETSc is really in bad > shape and mostly untested, with coding solutions that are probably outdated > now. > I'll see what I can do to fix the class if I have time in the next weeks. 
> > Stefano > > Il giorno mer 15 apr 2020 alle ore 17:21 Mark Adams ha > scritto: > >> >> >> On Wed, Apr 15, 2020 at 8:24 AM Stefano Zampini < >> stefano.zampini at gmail.com> wrote: >> >>> Mark >>> >>> I have fixed few things in the solver and it is tested with the current >>> master. >>> >> >> I rebased with master over the weekend .... >> >> >>> Can you write a MWE to reproduce the issue? Which version of CUDA and >>> CUSPARSE are you using? >>> >> >> You can use mark/feature-xgc-interface-rebase branch and add '-mat_type >> seqaijcusparse -fp_pc_factor_mat_solver_type cusparse >> -mat_cusparse_storage_format ell -vec_type cuda' >> to dm/impls/plex/tutorials/ex10.c >> >> The first stage, SNES solve, actually looks OK here. Maybe. >> >> Thanks, >> >> 10:01 mark/feature-xgc-interface-rebase *= ~/petsc$ make -f gmakefile >> test search='dm_impls_plex_tutorials-ex10_0' >> /usr/bin/python /ccs/home/adams/petsc/config/gmakegentest.py >> --petsc-dir=/ccs/home/adams/petsc --petsc-arch=arch-summit-opt64-gnu-cuda >> --testdir=./arch-summit-opt64-gnu-cuda/tests >> Using MAKEFLAGS: search=dm_impls_plex_tutorials-ex10_0 >> CC >> arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10.o >> CLINKER arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10 >> TEST >> arch-summit-opt64-gnu-cuda/tests/counts/dm_impls_plex_tutorials-ex10_0.counts >> ok dm_impls_plex_tutorials-ex10_0 >> not ok diff-dm_impls_plex_tutorials-ex10_0 # Error code: 1 >> # 14,16c14,16 >> # < 0 SNES Function norm 6.184233768573e-04 >> # < 1 SNES Function norm 1.467479466750e-08 >> # < 2 SNES Function norm 7.863111141350e-12 >> # --- >> # > 0 SNES Function norm 6.184233768572e-04 >> # > 1 SNES Function norm 1.467479466739e-08 >> # > 2 SNES Function norm 7.863102870090e-12 >> # 18,31c18,256 >> # < 0 SNES Function norm 6.182952107532e-04 >> # < 1 SNES Function norm 7.336382211149e-09 >> # < 2 SNES Function norm 1.566979901443e-11 >> # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE >> iterations 2 >> # < 0 SNES Function norm 6.183592738545e-04 >> # < 1 SNES Function norm 7.337681407420e-09 >> # < 2 SNES Function norm 1.408823933908e-11 >> # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE >> iterations 2 >> # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation >> error 0.0396569, accepting step of size 1e-06 >> # < 1 TS dt 1.25e-06 time 1e-06 >> # < 1) species-0: charge density= -1.6024814608984e+01 >> z-momentum= 2.0080682964364e-19 energy= 1.2018000284846e+05 >> # < 1) species-1: charge density= 1.6021676653316e+01 >> z-momentum= 1.4964483981137e-17 energy= 1.2017223215083e+05 >> # < 1) species-2: charge density= 2.8838441139703e-03 >> z-momentum= -1.1062018110807e-23 energy= 1.2019641370376e-03 >> # < 1) Total: charge density= -2.5411155383649e-04, >> momentum= 1.5165279748763e-17, energy= 2.4035223620125e+05 (m_i[0]/m_e = >> 3670.94, 140 cells), 1 sub threads >> # --- >> # > 0 SNES Function norm 6.182952107531e-04 >> # > 1 SNES Function norm 6.181600164904e-04 >> # > 2 SNES Function norm 6.180249471739e-04 >> # > 3 SNES Function norm 6.178899987549e-04 >> >> >>> I was planning to reorganize the factor code in AIJCUSPARSE in the next >>> days. 
>>> >>> kl-18967:petsc zampins$ git grep "solver_type cusparse" >>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >>> cusparse* -mat_cusparse_storage_format ell -vec_type cuda -pc_type ilu >>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>> ${DATAFILESPATH}/matrices/shallow_water1 -mat_type seqaijcusparse >>> -pc_factor_mat_*solver_type cusparse* -mat_cusparse_storage_format hyb >>> -vec_type cuda -ksp_type cg -pc_type icc >>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >>> cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type >>> bicg -pc_type ilu >>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >>> cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type >>> bicg -pc_type ilu -pc_factor_mat_ordering_type nd >>> src/ksp/ksp/examples/tutorials/ex46.c: args: -dm_mat_type >>> aijcusparse -dm_vec_type cuda -random_exact_sol -pc_type ilu -pc_factor_mat_*solver_type >>> cusparse* >>> src/ksp/ksp/examples/tutorials/ex59.c: args: -subdomain_mat_type >>> aijcusparse -physical_pc_bddc_dirichlet_pc_factor_mat_*solver_type >>> cusparse* >>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>> -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu >>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>> -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu >>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>> -vec_type cuda >>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>> -vec_type cuda >>> src/ksp/ksp/examples/tutorials/ex71.c: args: -pde_type Poisson -cells >>> 7,9,8 -dim 3 -ksp_view -pc_bddc_coarse_redundant_pc_type svd >>> -ksp_error_if_not_converged -pc_bddc_dirichlet_pc_type cholesky >>> -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* >>> -pc_bddc_dirichlet_pc_factor_mat_ordering_type nd -pc_bddc_neumann_pc_type >>> cholesky -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* >>> -pc_bddc_neumann_pc_factor_mat_ordering_type nd -matis_localmat_type >>> aijcusparse >>> src/ksp/ksp/examples/tutorials/ex72.c: args: -f0 >>> ${DATAFILESPATH}/matrices/medium -ksp_monitor_short -ksp_view -mat_view >>> ascii::ascii_info -mat_type aijcusparse -pc_factor_mat_*solver_type >>> cusparse* -pc_type ilu -vec_type cuda >>> src/snes/examples/tutorials/ex12.c: args: -matis_localmat_type >>> aijcusparse -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* >>> -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* >>> >>> On Apr 15, 2020, at 2:20 PM, Mark Adams wrote: >>> >>> I tried using a serial direct solver in cusparse and got bad numerics: >>> >>> -vector_type cuda -mat_type aijcusparse -pc_factor_mat_solver_type >>> cusparse >>> >>> Before I start debugging this I wanted to see if there are any known >>> issues that I should be aware of. >>> >>> Thanks, >>> >>> >>> > > -- > Stefano > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefano.zampini at gmail.com Wed Apr 15 14:18:24 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 15 Apr 2020 22:18:24 +0300 Subject: [petsc-users] CUDA error In-Reply-To: References: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> Message-ID: <1A17CE7C-F799-4A84-9068-8B6598DF8D5B@gmail.com> > On Apr 15, 2020, at 10:14 PM, Mark Adams wrote: > > Thanks, it looks correct. I am getting memory leaks (appended) > > And something horrible is going on with performance: > > MatLUFactorNum 130 1.0 9.2220e+00 1.0 6.51e+08 1.0 0.0e+00 0.0e+00 0.0e+00 30 0 0 0 0 30 0 0 0 0 71 0 390 3.33e+02 0 0.00e+00 0 > > MatLUFactorNum 130 1.0 6.5177e-01 1.0 1.28e+09 1.0 0.0e+00 0.0e+00 0.0e+00 4 1 0 0 0 4 1 0 0 0 1966 0 0 0.00e+00 0 0.00e+00 0 > Can you describe these numbers? It seems that in the second case the factorization is run on the CPU (as I explained in my previous message) > This is not urgent, but I'd like to get a serial LU GPU solver at some point. > > Thanks again, > Mark > > Lots of these: > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/veccuda2.cu > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/veccuda2.cu > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/veccuda2.cu > Yes, as I said, the code is in bad shape. I?ll see what I can do. > On Wed, Apr 15, 2020 at 12:47 PM Stefano Zampini > wrote: > Mark > > attached is the patch. I will open an MR in the next days if you confirm it is working for you > The issue is that CUSPARSE does not have a way to compute the triangular factors, so we demand the computation of the factors to PETSc (CPU). These factors are then copied to the GPU. > What was happening in the second step of SNES, was that the factors were never updated since the offloadmask was never updated. > > The real issue is that the CUSPARSE support in PETSc is really in bad shape and mostly untested, with coding solutions that are probably outdated now. > I'll see what I can do to fix the class if I have time in the next weeks. > > Stefano > > Il giorno mer 15 apr 2020 alle ore 17:21 Mark Adams > ha scritto: > > > On Wed, Apr 15, 2020 at 8:24 AM Stefano Zampini > wrote: > Mark > > I have fixed few things in the solver and it is tested with the current master. > > I rebased with master over the weekend .... > > Can you write a MWE to reproduce the issue? Which version of CUDA and CUSPARSE are you using? > > You can use mark/feature-xgc-interface-rebase branch and add '-mat_type seqaijcusparse -fp_pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format ell -vec_type cuda' to dm/impls/plex/tutorials/ex10.c > > The first stage, SNES solve, actually looks OK here. Maybe. 
> > Thanks, > > 10:01 mark/feature-xgc-interface-rebase *= ~/petsc$ make -f gmakefile test search='dm_impls_plex_tutorials-ex10_0' > /usr/bin/python /ccs/home/adams/petsc/config/gmakegentest.py --petsc-dir=/ccs/home/adams/petsc --petsc-arch=arch-summit-opt64-gnu-cuda --testdir=./arch-summit-opt64-gnu-cuda/tests > Using MAKEFLAGS: search=dm_impls_plex_tutorials-ex10_0 > CC arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10.o > CLINKER arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10 > TEST arch-summit-opt64-gnu-cuda/tests/counts/dm_impls_plex_tutorials-ex10_0.counts > ok dm_impls_plex_tutorials-ex10_0 > not ok diff-dm_impls_plex_tutorials-ex10_0 # Error code: 1 > # 14,16c14,16 > # < 0 SNES Function norm 6.184233768573e-04 > # < 1 SNES Function norm 1.467479466750e-08 > # < 2 SNES Function norm 7.863111141350e-12 > # --- > # > 0 SNES Function norm 6.184233768572e-04 > # > 1 SNES Function norm 1.467479466739e-08 > # > 2 SNES Function norm 7.863102870090e-12 > # 18,31c18,256 > # < 0 SNES Function norm 6.182952107532e-04 > # < 1 SNES Function norm 7.336382211149e-09 > # < 2 SNES Function norm 1.566979901443e-11 > # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 > # < 0 SNES Function norm 6.183592738545e-04 > # < 1 SNES Function norm 7.337681407420e-09 > # < 2 SNES Function norm 1.408823933908e-11 > # < Nonlinear fp_ solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 > # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 0.0396569, accepting step of size 1e-06 > # < 1 TS dt 1.25e-06 time 1e-06 > # < 1) species-0: charge density= -1.6024814608984e+01 z-momentum= 2.0080682964364e-19 energy= 1.2018000284846e+05 > # < 1) species-1: charge density= 1.6021676653316e+01 z-momentum= 1.4964483981137e-17 energy= 1.2017223215083e+05 > # < 1) species-2: charge density= 2.8838441139703e-03 z-momentum= -1.1062018110807e-23 energy= 1.2019641370376e-03 > # < 1) Total: charge density= -2.5411155383649e-04, momentum= 1.5165279748763e-17, energy= 2.4035223620125e+05 (m_i[0]/m_e = 3670.94, 140 cells), 1 sub threads > # --- > # > 0 SNES Function norm 6.182952107531e-04 > # > 1 SNES Function norm 6.181600164904e-04 > # > 2 SNES Function norm 6.180249471739e-04 > # > 3 SNES Function norm 6.178899987549e-04 > > I was planning to reorganize the factor code in AIJCUSPARSE in the next days. 
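The -mat_cusparse_storage_format {csr,ell,hyb} option that appears in the test arguments below selects the device storage layout for aijcusparse matrices. A hedged sketch of the corresponding API call, assuming the names MatCUSPARSESetFormat, MAT_CUSPARSE_ALL and MAT_CUSPARSE_ELL (none of which are quoted anywhere in this thread):

    /* A is assumed to have type MATSEQAIJCUSPARSE; use the ELL layout for all cusparse operations */
    ierr = MatCUSPARSESetFormat(A, MAT_CUSPARSE_ALL, MAT_CUSPARSE_ELL); CHKERRQ(ierr);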
> > kl-18967:petsc zampins$ git grep "solver_type cusparse" > src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format ell -vec_type cuda -pc_type ilu > src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/shallow_water1 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format hyb -vec_type cuda -ksp_type cg -pc_type icc > src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg -pc_type ilu > src/ksp/ksp/examples/tests/ex43.c: args: -f ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -mat_cusparse_storage_format csr -vec_type cuda -ksp_type bicg -pc_type ilu -pc_factor_mat_ordering_type nd > src/ksp/ksp/examples/tutorials/ex46.c: args: -dm_mat_type aijcusparse -dm_vec_type cuda -random_exact_sol -pc_type ilu -pc_factor_mat_solver_type cusparse > src/ksp/ksp/examples/tutorials/ex59.c: args: -subdomain_mat_type aijcusparse -physical_pc_bddc_dirichlet_pc_factor_mat_solver_type cusparse > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda > src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -vec_type cuda > src/ksp/ksp/examples/tutorials/ex71.c: args: -pde_type Poisson -cells 7,9,8 -dim 3 -ksp_view -pc_bddc_coarse_redundant_pc_type svd -ksp_error_if_not_converged -pc_bddc_dirichlet_pc_type cholesky -pc_bddc_dirichlet_pc_factor_mat_solver_type cusparse -pc_bddc_dirichlet_pc_factor_mat_ordering_type nd -pc_bddc_neumann_pc_type cholesky -pc_bddc_neumann_pc_factor_mat_solver_type cusparse -pc_bddc_neumann_pc_factor_mat_ordering_type nd -matis_localmat_type aijcusparse > src/ksp/ksp/examples/tutorials/ex72.c: args: -f0 ${DATAFILESPATH}/matrices/medium -ksp_monitor_short -ksp_view -mat_view ascii::ascii_info -mat_type aijcusparse -pc_factor_mat_solver_type cusparse -pc_type ilu -vec_type cuda > src/snes/examples/tutorials/ex12.c: args: -matis_localmat_type aijcusparse -pc_bddc_dirichlet_pc_factor_mat_solver_type cusparse -pc_bddc_neumann_pc_factor_mat_solver_type cusparse > >> On Apr 15, 2020, at 2:20 PM, Mark Adams > wrote: >> >> I tried using a serial direct solver in cusparse and got bad numerics: >> >> -vector_type cuda -mat_type aijcusparse -pc_factor_mat_solver_type cusparse >> >> Before I start debugging this I wanted to see if there are any known issues that I should be aware of. >> >> Thanks, > > > > -- > Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Apr 15 15:33:47 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 16:33:47 -0400 Subject: [petsc-users] Fwd: petsc on Cori Haswell In-Reply-To: References: Message-ID: We have a problem when going from 32K to 64K cores on Cori-haswell. Does Anyone have any thoughts? 
Thanks, Mark ---------- Forwarded message --------- From: David Trebotich Date: Wed, Apr 15, 2020 at 4:20 PM Subject: Re: petsc on Cori Haswell To: Mark Adams Hey Mark- I am running into some issues that I am convinced are from the PETSc build. I am able to build and run on up to 32K cores. At 64K I start getting stuff like below (looks like two issues: pmi stuff and MPI_Init). I have been working with Brian Freisen to see if it's a NERSC problem. At this point I build without PETSc and then run native gmg in Chombo and have no problems. The problems only come with building with PETSc, and at larger concurrencies. The only thing that has changed is that this is a new PETSc installation. Perhaps something changed in the PETSc version you built from previously? Thanks for the help. Treb Mon Apr 13 17:49:45 2020: [PE_101955]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_init:_pmi_mmap_init returned -1 Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_init:_pmi_mmap_init returned -1 Mon Apr 13 17:49:45 2020: [PE_82712]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=28, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_17868]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_97918]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_init:_pmi_mmap_init returned -1 Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_init:_pmi_mmap_init returned -1 Mon Apr 13 17:49:45 2020: [PE_110563]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_27899]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=38, pes_this_node=64, timeout=180 secs [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(537): MPID_Init(246).......: channel initialization failed MPID_Init(647).......: PMI2 init failed: 1 Attempting to use an MPI routine before initializing MPICH [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(537): MPID_Init(246).......: channel initialization failed MPID_Init(647).......: PMI2 init failed: 1 Attempting to use an MPI routine before initializing MPICH Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_init:_pmi_mmap_init returned -1 Mon Apr 13 17:49:45 2020: [PE_71962]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_64329]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_mmap_tmp: Warning bootstrap barrier 
failed: num_syncd=32, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_init:_pmi_mmap_init returned -1 [Mon Apr 13 17:49:45 2020] [c6-1c2s5n2] Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(537): MPID_Init(246).......: channel initialization failed MPID_Init(647).......: PMI2 init failed: 1 Attempting to use an MPI routine before initializing MPICH [Mon Apr 13 17:49:45 2020] [c9-4c2s13n2] Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(537): MPID_Init(246).......: channel initialization failed MPID_Init(647).......: PMI2 init failed: 1 Attempting to use an MPI routine before initializing MPICH Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_mmap_tmp: Warning bootstrap barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_init:_pmi_mmap_init returned -1 [Mon Apr 13 17:49:45 2020] [c6-3c2s9n1] Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(537): MPID_Init(246).......: channel initialization failed -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Apr 15 15:41:07 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 16:41:07 -0400 Subject: [petsc-users] petsc on Cori Haswell In-Reply-To: References: Message-ID: Whoops, this is actually Cori-KNL. On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote: > We have a problem when going from 32K to 64K cores on Cori-haswell. > Does Anyone have any thoughts? > Thanks, > Mark > > ---------- Forwarded message --------- > From: David Trebotich > Date: Wed, Apr 15, 2020 at 4:20 PM > Subject: Re: petsc on Cori Haswell > To: Mark Adams > > > Hey Mark- > I am running into some issues that I am convinced are from the PETSc > build. I am able to build and run on up to 32K cores. At 64K I start > getting stuff like below (looks like two issues: pmi stuff and MPI_Init). I > have been working with Brian Freisen to see if it's a NERSC problem. At > this point I build without PETSc and then run native gmg in Chombo and have > no problems. The problems only come with building with PETSc, and at larger > concurrencies. The only thing that has changed is that this is a new PETSc > installation. Perhaps something changed in the PETSc version you built from > previously? Thanks for the help. 
> Treb > > Mon Apr 13 17:49:45 2020: [PE_101955]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_init:_pmi_mmap_init returned -1 > Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_init:_pmi_mmap_init returned -1 > Mon Apr 13 17:49:45 2020: [PE_82712]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=28, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_17868]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_97918]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_init:_pmi_mmap_init returned -1 > Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_init:_pmi_mmap_init returned -1 > Mon Apr 13 17:49:45 2020: [PE_110563]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_27899]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=38, pes_this_node=64, timeout=180 secs > [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other MPI > error, error stack: > MPIR_Init_thread(537): > MPID_Init(246).......: channel initialization failed > MPID_Init(647).......: PMI2 init failed: 1 > Attempting to use an MPI routine before initializing MPICH > [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other MPI > error, error stack: > MPIR_Init_thread(537): > MPID_Init(246).......: channel initialization failed > MPID_Init(647).......: PMI2 init failed: 1 > Attempting to use an MPI routine before initializing MPICH > Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_init:_pmi_mmap_init returned -1 > Mon Apr 13 17:49:45 2020: [PE_71962]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_64329]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_init:_pmi_mmap_init returned -1 > [Mon Apr 13 17:49:45 2020] [c6-1c2s5n2] Fatal error in MPI_Init: Other MPI > error, error stack: > MPIR_Init_thread(537): > MPID_Init(246).......: channel initialization failed > MPID_Init(647).......: PMI2 init failed: 1 > Attempting to use an MPI routine before initializing MPICH > [Mon Apr 13 17:49:45 2020] [c9-4c2s13n2] Fatal error in MPI_Init: Other > MPI error, error stack: > MPIR_Init_thread(537): > MPID_Init(246).......: channel initialization failed > MPID_Init(647).......: PMI2 init failed: 1 > Attempting to use an MPI routine before initializing 
MPICH > Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_mmap_tmp: Warning bootstrap > barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs > Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_init:_pmi_mmap_init returned -1 > [Mon Apr 13 17:49:45 2020] [c6-3c2s9n1] Fatal error in MPI_Init: Other MPI > error, error stack: > MPIR_Init_thread(537): > MPID_Init(246).......: channel initialization failed > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Apr 15 15:47:33 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 16:47:33 -0400 Subject: [petsc-users] SuperLU + GPUs Message-ID: How does one use SuperLU with GPUs. I don't seem to get any GPU performance data so I assume GPUs are not getting turned on. Am I wrong about that? I configure with: configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp" --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC -fopenmp" --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis --download-superlu --download-superlu_dist --with-make-np=16 --download-parmetis --download-triangle --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 --with-64-bit-indices=0 --with-debugging=0 PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 --with-threadsaftey=1 --with-log=1 Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Apr 15 15:54:40 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 16:54:40 -0400 Subject: [petsc-users] petsc on Cori Haswell In-Reply-To: References: Message-ID: Also, can anyone recommend a highly scalable test problem that Treb can run with 64K cores? Thanks, On Wed, Apr 15, 2020 at 4:41 PM Mark Adams wrote: > Whoops, this is actually Cori-KNL. > > On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote: > >> We have a problem when going from 32K to 64K cores on Cori-haswell. >> Does Anyone have any thoughts? >> Thanks, >> Mark >> >> ---------- Forwarded message --------- >> From: David Trebotich >> Date: Wed, Apr 15, 2020 at 4:20 PM >> Subject: Re: petsc on Cori Haswell >> To: Mark Adams >> >> >> Hey Mark- >> I am running into some issues that I am convinced are from the PETSc >> build. I am able to build and run on up to 32K cores. At 64K I start >> getting stuff like below (looks like two issues: pmi stuff and MPI_Init). I >> have been working with Brian Freisen to see if it's a NERSC problem. At >> this point I build without PETSc and then run native gmg in Chombo and have >> no problems. The problems only come with building with PETSc, and at larger >> concurrencies. The only thing that has changed is that this is a new PETSc >> installation. Perhaps something changed in the PETSc version you built from >> previously? Thanks for the help. 
>> Treb >> >> Mon Apr 13 17:49:45 2020: [PE_101955]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_82712]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=28, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_17868]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_97918]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_110563]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_27899]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=38, pes_this_node=64, timeout=180 secs >> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_71962]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_64329]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_init:_pmi_mmap_init returned -1 >> [Mon Apr 13 17:49:45 2020] [c6-1c2s5n2] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> [Mon Apr 13 17:49:45 2020] [c9-4c2s13n2] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init 
failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_init:_pmi_mmap_init returned -1 >> [Mon Apr 13 17:49:45 2020] [c6-3c2s9n1] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Wed Apr 15 16:08:48 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 15 Apr 2020 16:08:48 -0500 Subject: [petsc-users] petsc on Cori Haswell In-Reply-To: References: Message-ID: Was there a petsc error stack? --Junchao Zhang On Wed, Apr 15, 2020 at 3:41 PM Mark Adams wrote: > Whoops, this is actually Cori-KNL. > > On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote: > >> We have a problem when going from 32K to 64K cores on Cori-haswell. >> Does Anyone have any thoughts? >> Thanks, >> Mark >> >> ---------- Forwarded message --------- >> From: David Trebotich >> Date: Wed, Apr 15, 2020 at 4:20 PM >> Subject: Re: petsc on Cori Haswell >> To: Mark Adams >> >> >> Hey Mark- >> I am running into some issues that I am convinced are from the PETSc >> build. I am able to build and run on up to 32K cores. At 64K I start >> getting stuff like below (looks like two issues: pmi stuff and MPI_Init). I >> have been working with Brian Freisen to see if it's a NERSC problem. At >> this point I build without PETSc and then run native gmg in Chombo and have >> no problems. The problems only come with building with PETSc, and at larger >> concurrencies. The only thing that has changed is that this is a new PETSc >> installation. Perhaps something changed in the PETSc version you built from >> previously? Thanks for the help. 
>> Treb >> >> Mon Apr 13 17:49:45 2020: [PE_101955]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_82712]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=28, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_17868]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_97918]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_110563]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_27899]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=38, pes_this_node=64, timeout=180 secs >> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_init:_pmi_mmap_init returned -1 >> Mon Apr 13 17:49:45 2020: [PE_71962]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_64329]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_init:_pmi_mmap_init returned -1 >> [Mon Apr 13 17:49:45 2020] [c6-1c2s5n2] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> [Mon Apr 13 17:49:45 2020] [c9-4c2s13n2] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> MPID_Init(647).......: PMI2 init 
failed: 1 >> Attempting to use an MPI routine before initializing MPICH >> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_mmap_tmp: Warning bootstrap >> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_init:_pmi_mmap_init returned -1 >> [Mon Apr 13 17:49:45 2020] [c6-3c2s9n1] Fatal error in MPI_Init: Other >> MPI error, error stack: >> MPIR_Init_thread(537): >> MPID_Init(246).......: channel initialization failed >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Wed Apr 15 16:11:48 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 15 Apr 2020 16:11:48 -0500 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: I remember Barry said superlu gpu support is broken. --Junchao Zhang On Wed, Apr 15, 2020 at 3:47 PM Mark Adams wrote: > How does one use SuperLU with GPUs. I don't seem to get any GPU > performance data so I assume GPUs are not getting turned on. Am I wrong > about that? > > I configure with: > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp" > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC -fopenmp" > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc > --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis > --download-superlu --download-superlu_dist --with-make-np=16 > --download-parmetis --download-triangle > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 > --with-64-bit-indices=0 --with-debugging=0 > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > --with-threadsaftey=1 --with-log=1 > > Thanks, > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 15 16:12:52 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Apr 2020 17:12:52 -0400 Subject: [petsc-users] petsc on Cori Haswell In-Reply-To: References: Message-ID: On Wed, Apr 15, 2020 at 5:10 PM Junchao Zhang wrote: > Was there a petsc error stack? > 1) SNES ex5 is a highly scalable problem. Just give it large enough m and n. 2) Junchao, it looks like MPI_Init() is failing, which I believe comes before we install our signal handler to get us the stack. Thanks, Matt > --Junchao Zhang > > > On Wed, Apr 15, 2020 at 3:41 PM Mark Adams wrote: > >> Whoops, this is actually Cori-KNL. >> >> On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote: >> >>> We have a problem when going from 32K to 64K cores on Cori-haswell. >>> Does Anyone have any thoughts? >>> Thanks, >>> Mark >>> >>> ---------- Forwarded message --------- >>> From: David Trebotich >>> Date: Wed, Apr 15, 2020 at 4:20 PM >>> Subject: Re: petsc on Cori Haswell >>> To: Mark Adams >>> >>> >>> Hey Mark- >>> I am running into some issues that I am convinced are from the PETSc >>> build. I am able to build and run on up to 32K cores. At 64K I start >>> getting stuff like below (looks like two issues: pmi stuff and MPI_Init). I >>> have been working with Brian Freisen to see if it's a NERSC problem. At >>> this point I build without PETSc and then run native gmg in Chombo and have >>> no problems. The problems only come with building with PETSc, and at larger >>> concurrencies. 
The only thing that has changed is that this is a new PETSc >>> installation. Perhaps something changed in the PETSc version you built from >>> previously? Thanks for the help. >>> Treb >>> >>> Mon Apr 13 17:49:45 2020: [PE_101955]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_init:_pmi_mmap_init returned >>> -1 >>> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_init:_pmi_mmap_init returned >>> -1 >>> Mon Apr 13 17:49:45 2020: [PE_82712]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=28, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_17868]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_97918]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_init:_pmi_mmap_init returned -1 >>> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_init:_pmi_mmap_init returned >>> -1 >>> Mon Apr 13 17:49:45 2020: [PE_110563]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_27899]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=38, pes_this_node=64, timeout=180 secs >>> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >>> MPI error, error stack: >>> MPIR_Init_thread(537): >>> MPID_Init(246).......: channel initialization failed >>> MPID_Init(647).......: PMI2 init failed: 1 >>> Attempting to use an MPI routine before initializing MPICH >>> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >>> MPI error, error stack: >>> MPIR_Init_thread(537): >>> MPID_Init(246).......: channel initialization failed >>> MPID_Init(647).......: PMI2 init failed: 1 >>> Attempting to use an MPI routine before initializing MPICH >>> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_init:_pmi_mmap_init returned -1 >>> Mon Apr 13 17:49:45 2020: [PE_71962]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_64329]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_init:_pmi_mmap_init returned -1 >>> [Mon Apr 13 17:49:45 2020] [c6-1c2s5n2] Fatal error in MPI_Init: Other >>> MPI error, error stack: >>> MPIR_Init_thread(537): >>> MPID_Init(246).......: channel initialization failed >>> MPID_Init(647).......: PMI2 init failed: 1 >>> Attempting to use an MPI routine before 
initializing MPICH >>> [Mon Apr 13 17:49:45 2020] [c9-4c2s13n2] Fatal error in MPI_Init: Other >>> MPI error, error stack: >>> MPIR_Init_thread(537): >>> MPID_Init(246).......: channel initialization failed >>> MPID_Init(647).......: PMI2 init failed: 1 >>> Attempting to use an MPI routine before initializing MPICH >>> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_mmap_tmp: Warning bootstrap >>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_init:_pmi_mmap_init returned -1 >>> [Mon Apr 13 17:49:45 2020] [c6-3c2s9n1] Fatal error in MPI_Init: Other >>> MPI error, error stack: >>> MPIR_Init_thread(537): >>> MPID_Init(246).......: channel initialization failed >>> >>> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 15 16:17:43 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 16:17:43 -0500 (CDT) Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: The build should work. It should give some verbose info [at runtime] regarding GPUs - from the following code. >>>>> SRC/cublas_utils.c >>>>>>>>>>> void DisplayHeader() { const int kb = 1024; const int mb = kb * kb; // cout << "NBody.GPU" << endl << "=========" << endl << endl; printf("CUDA version: v %d\n",CUDART_VERSION); //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." << THRUST_MINOR_VERSION << endl << endl; int devCount; cudaGetDeviceCount(&devCount); printf( "CUDA Devices: \n \n"); <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Satish On Wed, 15 Apr 2020, Junchao Zhang wrote: > I remember Barry said superlu gpu support is broken. > --Junchao Zhang > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams wrote: > > > How does one use SuperLU with GPUs. I don't seem to get any GPU > > performance data so I assume GPUs are not getting turned on. Am I wrong > > about that? > > > > I configure with: > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp" > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC -fopenmp" > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc > > --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis > > --download-superlu --download-superlu_dist --with-make-np=16 > > --download-parmetis --download-triangle > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 > > --with-64-bit-indices=0 --with-debugging=0 > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > > --with-threadsaftey=1 --with-log=1 > > > > Thanks, > > Mark > > > From junchao.zhang at gmail.com Wed Apr 15 16:21:37 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 15 Apr 2020 16:21:37 -0500 Subject: [petsc-users] petsc on Cori Haswell In-Reply-To: References: Message-ID: I want to know who called MPI_Init(). Petsc or Chombo? --Junchao Zhang On Wed, Apr 15, 2020 at 4:13 PM Matthew Knepley wrote: > On Wed, Apr 15, 2020 at 5:10 PM Junchao Zhang > wrote: > >> Was there a petsc error stack? >> > > 1) SNES ex5 is a highly scalable problem. 
Just give it large enough m and > n. > > 2) Junchao, it looks like MPI_Init() is failing, which I believe comes > before we install our signal handler to get us the stack. > > Thanks, > > Matt > > >> --Junchao Zhang >> >> >> On Wed, Apr 15, 2020 at 3:41 PM Mark Adams wrote: >> >>> Whoops, this is actually Cori-KNL. >>> >>> On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote: >>> >>>> We have a problem when going from 32K to 64K cores on Cori-haswell. >>>> Does Anyone have any thoughts? >>>> Thanks, >>>> Mark >>>> >>>> ---------- Forwarded message --------- >>>> From: David Trebotich >>>> Date: Wed, Apr 15, 2020 at 4:20 PM >>>> Subject: Re: petsc on Cori Haswell >>>> To: Mark Adams >>>> >>>> >>>> Hey Mark- >>>> I am running into some issues that I am convinced are from the PETSc >>>> build. I am able to build and run on up to 32K cores. At 64K I start >>>> getting stuff like below (looks like two issues: pmi stuff and MPI_Init). I >>>> have been working with Brian Freisen to see if it's a NERSC problem. At >>>> this point I build without PETSc and then run native gmg in Chombo and have >>>> no problems. The problems only come with building with PETSc, and at larger >>>> concurrencies. The only thing that has changed is that this is a new PETSc >>>> installation. Perhaps something changed in the PETSc version you built from >>>> previously? Thanks for the help. >>>> Treb >>>> >>>> Mon Apr 13 17:49:45 2020: [PE_101955]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_init:_pmi_mmap_init returned >>>> -1 >>>> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_init:_pmi_mmap_init returned >>>> -1 >>>> Mon Apr 13 17:49:45 2020: [PE_82712]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=28, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_17868]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_97918]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_init:_pmi_mmap_init returned >>>> -1 >>>> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_init:_pmi_mmap_init returned >>>> -1 >>>> Mon Apr 13 17:49:45 2020: [PE_110563]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_27899]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=38, pes_this_node=64, timeout=180 secs >>>> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >>>> MPI error, error stack: >>>> MPIR_Init_thread(537): >>>> MPID_Init(246).......: channel initialization failed >>>> MPID_Init(647).......: PMI2 init failed: 1 >>>> Attempting to use an MPI routine before initializing MPICH >>>> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in 
MPI_Init: Other >>>> MPI error, error stack: >>>> MPIR_Init_thread(537): >>>> MPID_Init(246).......: channel initialization failed >>>> MPID_Init(647).......: PMI2 init failed: 1 >>>> Attempting to use an MPI routine before initializing MPICH >>>> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_init:_pmi_mmap_init returned >>>> -1 >>>> Mon Apr 13 17:49:45 2020: [PE_71962]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_64329]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_init:_pmi_mmap_init returned >>>> -1 >>>> [Mon Apr 13 17:49:45 2020] [c6-1c2s5n2] Fatal error in MPI_Init: Other >>>> MPI error, error stack: >>>> MPIR_Init_thread(537): >>>> MPID_Init(246).......: channel initialization failed >>>> MPID_Init(647).......: PMI2 init failed: 1 >>>> Attempting to use an MPI routine before initializing MPICH >>>> [Mon Apr 13 17:49:45 2020] [c9-4c2s13n2] Fatal error in MPI_Init: Other >>>> MPI error, error stack: >>>> MPIR_Init_thread(537): >>>> MPID_Init(246).......: channel initialization failed >>>> MPID_Init(647).......: PMI2 init failed: 1 >>>> Attempting to use an MPI routine before initializing MPICH >>>> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_mmap_tmp: Warning bootstrap >>>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>>> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_init:_pmi_mmap_init returned >>>> -1 >>>> [Mon Apr 13 17:49:45 2020] [c6-3c2s9n1] Fatal error in MPI_Init: Other >>>> MPI error, error stack: >>>> MPIR_Init_thread(537): >>>> MPID_Init(246).......: channel initialization failed >>>> >>>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dptrebotich at lbl.gov Wed Apr 15 16:26:51 2020 From: dptrebotich at lbl.gov (David Trebotich) Date: Wed, 15 Apr 2020 14:26:51 -0700 Subject: [petsc-users] petsc on Cori Haswell In-Reply-To: References: Message-ID: Matt is correct on his point 2. And I'll get fresh output to send your way. Stay tuned. On Wed, Apr 15, 2020, 2:21 PM Junchao Zhang wrote: > I want to know who called MPI_Init(). Petsc or Chombo? > --Junchao Zhang > > > On Wed, Apr 15, 2020 at 4:13 PM Matthew Knepley wrote: > >> On Wed, Apr 15, 2020 at 5:10 PM Junchao Zhang >> wrote: >> >>> Was there a petsc error stack? >>> >> >> 1) SNES ex5 is a highly scalable problem. Just give it large enough m >> and n. >> >> 2) Junchao, it looks like MPI_Init() is failing, which I believe comes >> before we install our signal handler to get us the stack. >> >> Thanks, >> >> Matt >> >> >>> --Junchao Zhang >>> >>> >>> On Wed, Apr 15, 2020 at 3:41 PM Mark Adams wrote: >>> >>>> Whoops, this is actually Cori-KNL. >>>> >>>> On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote: >>>> >>>>> We have a problem when going from 32K to 64K cores on Cori-haswell. >>>>> Does Anyone have any thoughts? 
>>>>> Thanks, >>>>> Mark >>>>> >>>>> ---------- Forwarded message --------- >>>>> From: David Trebotich >>>>> Date: Wed, Apr 15, 2020 at 4:20 PM >>>>> Subject: Re: petsc on Cori Haswell >>>>> To: Mark Adams >>>>> >>>>> >>>>> Hey Mark- >>>>> I am running into some issues that I am convinced are from the PETSc >>>>> build. I am able to build and run on up to 32K cores. At 64K I start >>>>> getting stuff like below (looks like two issues: pmi stuff and MPI_Init). I >>>>> have been working with Brian Freisen to see if it's a NERSC problem. At >>>>> this point I build without PETSc and then run native gmg in Chombo and have >>>>> no problems. The problems only come with building with PETSc, and at larger >>>>> concurrencies. The only thing that has changed is that this is a new PETSc >>>>> installation. Perhaps something changed in the PETSc version you built from >>>>> previously? Thanks for the help. >>>>> Treb >>>>> >>>>> Mon Apr 13 17:49:45 2020: [PE_101955]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_101958]:_pmi_init:_pmi_mmap_init >>>>> returned -1 >>>>> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_101979]:_pmi_init:_pmi_mmap_init >>>>> returned -1 >>>>> Mon Apr 13 17:49:45 2020: [PE_82712]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=28, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_17868]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_97918]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=33, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_17869]:_pmi_init:_pmi_mmap_init returned >>>>> -1 >>>>> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_110562]:_pmi_init:_pmi_mmap_init >>>>> returned -1 >>>>> Mon Apr 13 17:49:45 2020: [PE_110563]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=27, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_27899]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=38, pes_this_node=64, timeout=180 secs >>>>> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >>>>> MPI error, error stack: >>>>> MPIR_Init_thread(537): >>>>> MPID_Init(246).......: channel initialization failed >>>>> MPID_Init(647).......: PMI2 init failed: 1 >>>>> Attempting to use an MPI routine before initializing MPICH >>>>> [Mon Apr 13 17:49:45 2020] [c7-4c1s6n0] Fatal error in MPI_Init: Other >>>>> MPI error, error stack: >>>>> MPIR_Init_thread(537): >>>>> MPID_Init(246).......: channel initialization failed >>>>> MPID_Init(647).......: PMI2 init failed: 1 >>>>> Attempting to use an MPI routine before initializing MPICH >>>>> Mon Apr 13 17:49:45 2020: [PE_71961]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: 
[PE_71961]:_pmi_init:_pmi_mmap_init returned >>>>> -1 >>>>> Mon Apr 13 17:49:45 2020: [PE_71962]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_64329]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=32, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_64335]:_pmi_init:_pmi_mmap_init returned >>>>> -1 >>>>> [Mon Apr 13 17:49:45 2020] [c6-1c2s5n2] Fatal error in MPI_Init: Other >>>>> MPI error, error stack: >>>>> MPIR_Init_thread(537): >>>>> MPID_Init(246).......: channel initialization failed >>>>> MPID_Init(647).......: PMI2 init failed: 1 >>>>> Attempting to use an MPI routine before initializing MPICH >>>>> [Mon Apr 13 17:49:45 2020] [c9-4c2s13n2] Fatal error in MPI_Init: >>>>> Other MPI error, error stack: >>>>> MPIR_Init_thread(537): >>>>> MPID_Init(246).......: channel initialization failed >>>>> MPID_Init(647).......: PMI2 init failed: 1 >>>>> Attempting to use an MPI routine before initializing MPICH >>>>> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_mmap_tmp: Warning bootstrap >>>>> barrier failed: num_syncd=35, pes_this_node=64, timeout=180 secs >>>>> Mon Apr 13 17:49:45 2020: [PE_71960]:_pmi_init:_pmi_mmap_init returned >>>>> -1 >>>>> [Mon Apr 13 17:49:45 2020] [c6-3c2s9n1] Fatal error in MPI_Init: Other >>>>> MPI error, error stack: >>>>> MPIR_Init_thread(537): >>>>> MPID_Init(246).......: channel initialization failed >>>>> >>>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Apr 15 16:47:07 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 17:47:07 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: On Wed, Apr 15, 2020 at 5:17 PM Satish Balay wrote: > The build should work. It should give some verbose info [at runtime] > regarding GPUs - from the following code. > > I don't see that and I am running GPUs in my code and have gotten cusparse LU to run. Should I use '-info :sys:' ? > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > void DisplayHeader() > { > const int kb = 1024; > const int mb = kb * kb; > // cout << "NBody.GPU" << endl << "=========" << endl << endl; > > printf("CUDA version: v %d\n",CUDART_VERSION); > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." << > THRUST_MINOR_VERSION << endl << endl; > > int devCount; > cudaGetDeviceCount(&devCount); > printf( "CUDA Devices: \n \n"); > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > Satish > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > > I remember Barry said superlu gpu support is broken. > > --Junchao Zhang > > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams wrote: > > > > > How does one use SuperLU with GPUs. I don't seem to get any GPU > > > performance data so I assume GPUs are not getting turned on. Am I wrong > > > about that? 
> > > > > > I configure with: > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp" > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC > -fopenmp" > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc > > > --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis > > > --download-superlu --download-superlu_dist --with-make-np=16 > > > --download-parmetis --download-triangle > > > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 > > > --with-64-bit-indices=0 --with-debugging=0 > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > > > --with-threadsaftey=1 --with-log=1 > > > > > > Thanks, > > > Mark > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 15 16:58:03 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 16:58:03 -0500 (CDT) Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: Please send configure.log This is what I get on my linux build: [balay at p1 petsc]$ ./configure --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 --with-openmp=1 --download-superlu-dist=1 && make && make check Running check examples to verify correct installation Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes 1a2,19 > CUDA version: v 10020 > CUDA Devices: > > 0 : Quadro T2000 7 5 > Global memory: 3911 mb > Shared memory: 48 kb > Constant memory: 64 kb > Block registers: 65536 > > CUDA version: v 10020 > CUDA Devices: > > 0 : Quadro T2000 7 5 > Global memory: 3911 mb > Shared memory: 48 kb > Constant memory: 64 kb > Block registers: 65536 > /home/balay/petsc/src/snes/tutorials Possible problem with ex19 running with superlu_dist, diffs above ========================================= Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process Completed test examples On Wed, 15 Apr 2020, Mark Adams wrote: > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay wrote: > > > The build should work. It should give some verbose info [at runtime] > > regarding GPUs - from the following code. > > > > > I don't see that and I am running GPUs in my code and have gotten cusparse > LU to run. Should I use '-info :sys:' ? > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > > void DisplayHeader() > > { > > const int kb = 1024; > > const int mb = kb * kb; > > // cout << "NBody.GPU" << endl << "=========" << endl << endl; > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." << > > THRUST_MINOR_VERSION << endl << endl; > > > > int devCount; > > cudaGetDeviceCount(&devCount); > > printf( "CUDA Devices: \n \n"); > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > > > Satish > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > > > > I remember Barry said superlu gpu support is broken. > > > --Junchao Zhang > > > > > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams wrote: > > > > > > > How does one use SuperLU with GPUs. I don't seem to get any GPU > > > > performance data so I assume GPUs are not getting turned on. 
Am I wrong > > > > about that? > > > > > > > > I configure with: > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp" > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC > > -fopenmp" > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc > > > > --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis > > > > --download-superlu --download-superlu_dist --with-make-np=16 > > > > --download-parmetis --download-triangle > > > > > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 > > > > --with-64-bit-indices=0 --with-debugging=0 > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > > > > --with-threadsaftey=1 --with-log=1 > > > > > > > > Thanks, > > > > Mark > > > > > > > > > > > > From balay at mcs.anl.gov Wed Apr 15 17:20:29 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 17:20:29 -0500 (CDT) Subject: [petsc-users] Status about using PETSC with MATLAB In-Reply-To: References: Message-ID: Some info is at https://www.mcs.anl.gov/petsc/documentation/faq.html#matlab Satish On Tue, 14 Apr 2020, Bao Kai wrote: > Hi, > > I saw some discussion in the mailing list, while not a lot. > > I am wondering the current status about using PETSc with MATLAB before > I dig in. > > To be short, Would I be able to use the non-linear solver or linear > solver with a MATLAB code? > > For example, I have a simulation code with MATLAB, I want to use/test > the non-linear solver or linear solver from PETSc. Is it something > doable or supported here? It is mostly for testing and study purposes. > The performance is not the main concern here. > > Thanks. > > Best Regards, > Kai Bao > From balay at mcs.anl.gov Wed Apr 15 17:31:12 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 17:31:12 -0500 (CDT) Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL In-Reply-To: <2E1654F2-55C4-4F8C-A06E-EBE74758E386@icloud.com> References: <2E1654F2-55C4-4F8C-A06E-EBE74758E386@icloud.com> Message-ID: >From prior e-mail - you wanted to use AMD GPU on OSX. This build below is CPU build - not for GPU. [Karl can confirm] I think OSX has OpenCL installed by default [perhaps via xcode?] - so you might just need the additional configure option: --with-opencl=1 Satish On Thu, 16 Apr 2020, huabel via petsc-users wrote: > Hi Satish, that patch is good, thank you! > > > > On Apr 15, 2020, at 23:58, Satish Balay wrote: > > > >> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --prefix=/Users/fire/opt/petsc313 --with-zlib --with-viennacl=1 --with-viennacl-dir=/Users/fire/opt/viennacl > > > > I guess you are running viennacl (opencl) on CPU. > > > > please try the attached patch. > > > > cd petsc > > patch -Np1 < viennacl.patch > > > > Or use branch balay/viennacl-cpu-check/maint in petsc repo > > > > Satish > > > > On Wed, 15 Apr 2020, huabel via petsc-users wrote: > > > >> Dear Users, > >> > >> I?m try to use petsc3.13 with ViennaCL , when I try to run src/vec/vec/tutorials/ex1.c, I get next error, thanks. 
> >> > >> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL > >> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > >> Expected in: flat namespace > >> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > >> [1] 22602 abort ./ex1 > >> > >> ? tutorials git:(master) ? ./ex1 -vec_type viennacl -mat_type aijviennacl > >> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL > >> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > >> Expected in: flat namespace > >> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib > >> [1] 23268 abort ./ex1 -vec_type viennacl -mat_type aijviennacl > >> > >> > >> > >> Thanks > >> Abel Hu > >> > > > From balay at mcs.anl.gov Wed Apr 15 17:37:04 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 17:37:04 -0500 (CDT) Subject: [petsc-users] error with xlib In-Reply-To: References: Message-ID: > clang: error: unsupported option '-fopenmp' If you need openmp build on OSX - perhaps its best to install with brew/gcc - that actually accepts -fopenmp option? Or - install all externalpackages - that don't need openmp without openmp? step-1 ./configure --download-hdf5 PETSC_ARCH=arch-prebuid --prefix=$HOME/packages step-2 ./configure --with-hdf5-dir=$HOME/packages --with-openmp=1 Satish On Mon, 13 Apr 2020, Mark Adams wrote: > Ah, you have zlib=1. Now hdf5 fails. > > On Mon, Apr 13, 2020 at 1:19 PM Satish Balay wrote: > > > And here is a p4est build. > > > > Satish > > -------- > > > > balay at kpro petsc % ./configure --with-mpi-dir=$HOME/soft/mpich-3.3.2 > > --with-zlib=1 --download-p4est > > > > =============================================================================== > > Configuring PETSc to compile on your system > > > > > > =============================================================================== > > =============================================================================== > > > > ***** WARNING: You have an older version of Gnu make, it will > > work, > > but may not support all the parallel testing > > options. 
You can install the > > latest Gnu make with your > > package manager, such as brew or macports, or use > > the > > --download-make option to get the latest Gnu make ***** > > > > > > =============================================================================== > > > > ====== > > ========================================================================= > > > > Trying to download git:// > > https://bitbucket.org/petsc/pkg-sowing.git for SOWING > > > > =============================================================================== > > > > > > =============================================================================== > > > > Running configure on SOWING; this may take several minutes > > > > > > =============================================================================== > > > > =========== > > ==================================================================== > > > > Running make on SOWING; this may take several minutes > > > > > > =============================================================================== > > > > > > =============================================================================== > > > > Running make install on SOWING; this may take several minutes > > > > > > =============================================================================== > > > > ================ > > =============================================================== > > > > Trying to download git://https://github.com/tisaac/p4est for P4EST > > > > > > =============================================================================== > > > > > > =============================================================================== > > > > Trying to bootstrap p4est using autotools; this may take > > several minutes > > > > =============================================================================== > > > > ===================== > > ========================================================== > > > > Running configure on P4EST; this may take several minutes > > > > > > =============================================================================== > > > > > > =============================================================================== > > > > Running make on P4EST; this may take several minutes > > > > > > =============================================================================== > > > > ========================== > > ===================================================== > > > > Running make install on P4EST; this may take several minutes > > > > > > =============================================================================== > > > > Compilers: > > > > > > C Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpicc -Wall > > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > > Version: Apple clang version 11.0.0 (clang-1100.0.33.8) > > C++ Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpicxx -Wall > > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > > -fno-stack-check -fvisibility=hidden -g > > Version: Apple clang version 11.0.0 (clang-1100.0.33.8) > > Fortran Compiler: /Users/balay/soft/mpich-3.3.2/bin/mpif90 -Wall > > -ffree-line-length-0 -Wno-unused-dummy-argument -g > > Version: GNU Fortran (Homebrew GCC 9.3.0) 9.3.0 > > Linkers: > > Shared linker: /Users/balay/soft/mpich-3.3.2/bin/mpicc -dynamiclib > > -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall > > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > > -fno-stack-check 
-Qunused-arguments -fvisibility=hidden -g3 > > Dynamic linker: /Users/balay/soft/mpich-3.3.2/bin/mpicc -dynamiclib > > -single_module -undefined dynamic_lookup -multiply_defined suppress -Wall > > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > > Libraries linked against: -lc++ -ldl > > make: > > Version: 3.81 > > /usr/bin/make > > BlasLapack: > > Library: -llapack -lblas > > Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> > > yourprogram -log_view) > > uses 4 byte integers > > MPI: > > Version: 3 > > Includes: -I/Users/balay/soft/mpich-3.3.2/include > > Mpiexec: /Users/balay/soft/mpich-3.3.2/bin/mpiexec > > MPICH_NUMVERSION: 30302300 > > pthread: > > X: > > Includes: -I/opt/X11/include > > Library: -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 > > zlib: > > Library: -lz > > cmake: > > Version: 3.16.5 > > /usr/local/bin/cmake > > regex: > > p4est: > > Includes: -I/Users/balay/petsc/arch-darwin-c-debug/include > > Library: -Wl,-rpath,/Users/balay/petsc/arch-darwin-c-debug/lib > > -L/Users/balay/petsc/arch-darwin-c-debug/lib -lp4est -lsc > > sowing: > > Version: 1.1.25 > > /Users/balay/petsc/arch-darwin-c-debug/bin/bfort > > Language used to compile PETSc: C > > PETSc: > > PETSC_ARCH: arch-darwin-c-debug > > PETSC_DIR: /Users/balay/petsc > > Scalar type: real > > Precision: double > > Integer size: 4 bytes > > shared libraries: enabled > > Memory alignment from malloc(): 16 bytes > > > > xxx=========================================================================xxx > > Configure stage complete. Now build PETSc libraries with: > > make PETSC_DIR=/Users/balay/petsc PETSC_ARCH=arch-darwin-c-debug all > > > > xxx=========================================================================xxx > > balay at kpro petsc % > > > > > > On Mon, 13 Apr 2020, Satish Balay via petsc-users wrote: > > > > > you haven't sent any logs for this issue.. > > > [../arch-macosx-gnu-O-omp.py script or configure.log with the failure] > > > > > > Satish > > > > > > ------- > > > ipro:petsc balay$ ./configure --with-fortran-bindings=0 --with-mpi=0 > > --with-zlib=1 > > > > > =============================================================================== > > > Configuring PETSc to compile on your system > > > > > > > =============================================================================== > > > > > =============================================================================== > > > > ***** WARNING: You have an older version of Gnu make, it will > > work, > > but may not support all the parallel testing > > options. 
You can install the > > latest Gnu make with your > > package manager, such as brew or macports, or use > > the > > --download-make option to get the latest Gnu make ***** > > > > > > =============================================================================== > > > > Comp > > il > > > ers: > > > > > > > C Compiler: gcc -Wall -Wwrite-strings -Wno-strict-aliasing > > -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments > > -fvisibility=hidden -g3 > > > Version: Apple clang version 11.0.3 (clang-1103.0.32.29) > > > C++ Compiler: g++ -Wall -Wwrite-strings -Wno-strict-aliasing > > -Wno-unknown-pragmas -fstack-protector -fno-stack-check -fvisibility=hidden > > -g -std=c++14 > > > Version: Apple clang version 11.0.3 (clang-1103.0.32.29) > > > Fortran Compiler: gfortran -Wall -ffree-line-length-0 > > -Wno-unused-dummy-argument -g > > > Version: GNU Fortran (Homebrew GCC 9.3.0) 9.3.0 > > > Linkers: > > > Shared linker: gcc -dynamiclib -single_module -undefined > > dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings > > -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > > > Dynamic linker: gcc -dynamiclib -single_module -undefined > > dynamic_lookup -multiply_defined suppress -Wall -Wwrite-strings > > -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > > -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 > > > Libraries linked against: -lc++ -ldl > > > make: > > > Version: 3.81 > > > /usr/bin/make > > > BlasLapack: > > > Library: -llapack -lblas > > > Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> > > yourprogram -log_view) > > > uses 4 byte integers > > > pthread: > > > zlib: > > > Library: -lz > > > cmake: > > > Version: 3.16.5 > > > /usr/local/bin/cmake > > > X: > > > Includes: -I/opt/X11/include > > > Library: -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 > > > regex: > > > Language used to compile PETSc: C > > > PETSc: > > > PETSC_ARCH: arch-darwin-c-debug > > > PETSC_DIR: /Users/balay/petsc > > > Scalar type: real > > > Precision: double > > > Integer size: 4 bytes > > > shared libraries: enabled > > > Memory alignment from malloc(): 16 bytes > > > > > xxx=========================================================================xxx > > > Configure stage complete. 
Now build PETSc libraries with: > > > make PETSC_DIR=/Users/balay/petsc PETSC_ARCH=arch-darwin-c-debug all > > > > > xxx=========================================================================xxx > > > ipro:petsc balay$ > > > > > > > > > > > > > > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > > > > > On Mon, Apr 13, 2020 at 12:48 PM Satish Balay > > wrote: > > > > > > > > > This is very funky > > > > > > > > > > >>> > > > > > Configure Options: --configModules=PETSc.Configure > > > > > --optionsModule=config.compilerOptions > > > > > --with-mpi-dir=/usr/local/Cellar/mpich/3.3.2 COPTFLAGS="-O2 -g > > > > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" -L"$(brew > > > > > --prefix libomp)/lib -lomp"" CXXOPTFLAGS="-O2 -g -Xpreprocessor > > -fopenmp > > > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib > > -lomp"" > > > > > FOPTFLAGS="-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > > > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp"" > > > > > --download-parmetis=1 --download-metis=1 --download-hypre=1 > > > > > --download-triangle=1 --download-p4est=1 --download-zlib --with-x=0 > > > > > --download-ctetgen --with-debugging=0 --download-hdf5=1 > > > > > PETSC_ARCH=arch-macosx-gnu-O-omp --with-openmp=1 --with-log=0 > > > > > --with-threadsafety --download-chaco > > > > > <<< > > > > > > > > > > -I"$(brew --prefix libomp)/include" type options to configure > > doesn't make > > > > > sense. You are using bash syntax here - and expecting configure to > > resolve > > > > > it. Its best for your bash shell to evaluate this before passing > > this info > > > > > to configure > > > > > > > > > > Also --download-zlib isn't needed on OSX > > > > > > > > > > > > > Hum, I get: > > > > > > > > 12:52 mark/feature-xgc-interface-rebase *= ~/Codes/petsc$ > > > > ../arch-macosx-gnu-O-omp.py > > > > > > =============================================================================== > > > > Configuring PETSc to compile on your system > > > > > > > > > > =============================================================================== > > > > TESTING: configureExternalPackagesDir from > > > > config.framework(config/BuildSystem/config/framework.py:911) > > > > > > > > > > > > > > ******************************************************************************* > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > > for > > > > details): > > > > > > ------------------------------------------------------------------------------- > > > > Package p4est requested but dependency zlib not requested. Perhaps you > > want > > > > --download-zlib > > > > > > ******************************************************************************* > > > > > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > On Mon, 13 Apr 2020, Mark Adams wrote: > > > > > > > > > > > Now that I look at it, I see: > > > > > > > > > > > > CFLAGS="-fstack-protector -fno-stack-check -Qunused-arguments -O2 > > -g > > > > > > -Xpreprocessor -fopenmp -I"$(brew --prefix libomp)/include" > > -L"$(brew > > > > > > --prefix libomp)/lib -lomp"" > > > > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > > > > > > > > > Note the two ". That does not look right. 
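One way to sidestep the nested-quote problem is to resolve the sub-shell before the option strings ever reach configure: either let the interactive shell expand $(brew --prefix libomp) itself (no inner double quotes), or, in a Python driver script such as the arch-macosx-gnu-O-omp.py mentioned in this thread, compute the prefix first with something along the lines of subprocess.check_output(['brew', '--prefix', 'libomp']).decode().strip() and concatenate the result into the COPTFLAGS/CXXOPTFLAGS/FOPTFLAGS strings, in the same spirit as the os.environ['OLCF_NETLIB_LAPACK_ROOT'] example that follows. (This is a sketch of the idea, not a tested configure line.)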
I use > > > > > > > > > > > > 'COPTFLAGS=-O2 -g -Xpreprocessor -fopenmp -I"$(brew --prefix > > > > > > libomp)/include" -L"$(brew --prefix libomp)/lib -lomp" ', > > > > > > > > > > > > I know how to do stuff like: > > > > > > > > > > > > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] > > + > > > > > > '/lib64 -lblas -llapack' > > > > > > > > > > > > Is there like and os.exec that I could use like this for my FLAGS? > > > > > > > > > > > > > > > > > > > > > > > > On Mon, Apr 13, 2020 at 11:46 AM Matthew Knepley < > > knepley at gmail.com> > > > > > wrote: > > > > > > > > > > > > > On Mon, Apr 13, 2020 at 11:34 AM Mark Adams > > wrote: > > > > > > > > > > > > > >> I get this error configuring zlib, osx, with OpenMP. > > > > > > >> Any ideas? > > > > > > >> > > > > > > > > > > > > > > This failed without output > > > > > > > > > > > > > > Executing: cd > > > > > > > > > > > > > > /Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp/externalpackages/zlib-1.2.11 > > > > > > > && CC="/usr/local/Cellar/mpich/3.3.2/bin/mpicc" > > > > > CFLAGS="-fstack-protector > > > > > > > -fno-stack-check -Qunused-arguments -O2 -g -Xpreprocessor > > -fopenmp > > > > > > > -I"$(brew --prefix libomp)/include" -L"$(brew --prefix > > libomp)/lib > > > > > -lomp"" > > > > > > > prefix="/Users/markadams/Codes/petsc/arch-macosx-gnu-O-omp" > > > > > ./configure && > > > > > > > /usr/bin/make -j7 -l12.0 && /usr/bin/make install > > > > > > > > > > > > > > So execute each step in turn and see what fails. > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > Matt > > > > > > > > > > > > > > > > > > > > >> Thanks, > > > > > > >> Mark > > > > > > >> > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > What most experimenters take for granted before they begin their > > > > > > > experiments is infinitely more interesting than any results to > > which > > > > > their > > > > > > > experiments lead. > > > > > > > -- Norbert Wiener > > > > > > > > > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From mfadams at lbl.gov Wed Apr 15 19:17:08 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Apr 2020 20:17:08 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: Ah, OK 'check' will test SuperLU. Semi worked: s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp check Running check examples to verify correct installation Using PETSC_DIR=/ccs/home/adams/petsc and PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes 2c2,38 < Number of SNES iterations = 2 --- > CUDA version: v 10010 > CUDA Devices: > > 0 : Tesla V100-SXM2-16GB 7 0 > Global memory: 16128 mb > Shared memory: 48 kb > Constant memory: 64 kb > Block registers: 65536 > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion `cacheNode != __null' failed. 
> [h16n07:78357] *** Process received signal *** > [h16n07:78357] Signal: Aborted (6) > [h16n07:78357] Signal code: (1704218624) > [h16n07:78357] [ 0] [0x2000000504d8] > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] > [h16n07:78357] [ 3] /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] > [h16n07:78357] [ 4] /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > [h16n07:78357] [ 5] /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > [h16n07:78357] [ 6] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] > [h16n07:78357] [ 7] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] > [h16n07:78357] [ 8] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] > [h16n07:78357] [ 9] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] > [h16n07:78357] [10] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] > [h16n07:78357] [11] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] > [h16n07:78357] [12] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] > [h16n07:78357] [13] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] > [h16n07:78357] [14] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] > [h16n07:78357] [15] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] > [h16n07:78357] [16] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] > [h16n07:78357] [17] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] > [h16n07:78357] [18] /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] > [h16n07:78357] [19] ./ex19[0x10002ac8] > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] > [h16n07:78357] [21] /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] > [h16n07:78357] *** End of error message *** > ERROR: One or more process (first noticed rank 0) terminated with signal 6 /ccs/home/adams/petsc/src/snes/tutorials Possible problem with ex19 running with superlu_dist, diffs above ========================================= On Wed, Apr 15, 2020 at 5:58 PM Satish Balay wrote: > Please send configure.log > > This is what I get on my linux build: > > [balay at p1 petsc]$ ./configure > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 > --with-openmp=1 --download-superlu-dist=1 && make && make check > > Running check examples to verify correct installation > Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes > 1a2,19 > > CUDA version: v 10020 > > CUDA Devices: > > > 
> 0 : Quadro T2000 7 5 > > Global memory: 3911 mb > > Shared memory: 48 kb > > Constant memory: 64 kb > > Block registers: 65536 > > > > CUDA version: v 10020 > > CUDA Devices: > > > > 0 : Quadro T2000 7 5 > > Global memory: 3911 mb > > Shared memory: 48 kb > > Constant memory: 64 kb > > Block registers: 65536 > > > /home/balay/petsc/src/snes/tutorials > Possible problem with ex19 running with superlu_dist, diffs above > ========================================= > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process > Completed test examples > > > On Wed, 15 Apr 2020, Mark Adams wrote: > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay wrote: > > > > > The build should work. It should give some verbose info [at runtime] > > > regarding GPUs - from the following code. > > > > > > > > I don't see that and I am running GPUs in my code and have gotten > cusparse > > LU to run. Should I use '-info :sys:' ? > > > > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > > > void DisplayHeader() > > > { > > > const int kb = 1024; > > > const int mb = kb * kb; > > > // cout << "NBody.GPU" << endl << "=========" << endl << endl; > > > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." << > > > THRUST_MINOR_VERSION << endl << endl; > > > > > > int devCount; > > > cudaGetDeviceCount(&devCount); > > > printf( "CUDA Devices: \n \n"); > > > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > > > > > Satish > > > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > > > > > > I remember Barry said superlu gpu support is broken. > > > > --Junchao Zhang > > > > > > > > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams wrote: > > > > > > > > > How does one use SuperLU with GPUs. I don't seem to get any GPU > > > > > performance data so I assume GPUs are not getting turned on. Am I > wrong > > > > > about that? > > > > > > > > > > I configure with: > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp" > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC > > > -fopenmp" > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > --with-cxx=mpicxx > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 > --download-metis > > > > > --download-superlu --download-superlu_dist --with-make-np=16 > > > > > --download-parmetis --download-triangle > > > > > > > > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 > --with-x=0 > > > > > --with-64-bit-indices=0 --with-debugging=0 > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > > > > > --with-threadsaftey=1 --with-log=1 > > > > > > > > > > Thanks, > > > > > Mark > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbllm2018 at hotmail.com Wed Apr 15 20:30:16 2020 From: lbllm2018 at hotmail.com (Bin Liu) Date: Thu, 16 Apr 2020 01:30:16 +0000 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: References: Message-ID: Sure. Thanks for your information. Actually, I used to call MatSetValues for each column. 
Since my matrix is sparse and I am not sure whether the non-zeros along each column are identical for each row, I am currently wondering if it is possible to insert the entire local dense matrix into the global sparse matrix using a single MatSetValues routine. Would it be more efficient than I identify the nonzeros and inserting them row-by-row? I may also set the ?IGNORE_ZERO_ENTRIES? for the global sparse matrix before ?MatSetValues?. Would it be helpful? Thanks for your helps. From: Matthew Knepley [mailto:knepley at gmail.com] Sent: Wednesday, 15 April 2020 7:01 PM To: Bin Liu Cc: Junchao Zhang ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] inserting multiple rows together into a matrix On Wed, Apr 15, 2020 at 1:24 AM Bin Liu > wrote: Thanks for your example. My problem is resolved. Meanwhile I am wondering, if it is possible to make this example more flexible. I mean what if the columns in each row are different? Is there any way to insert them all together? No. If the columns are different, you make a separate calls to MatSetValues(). Thanks, Matt Regards Bin From: Junchao Zhang [mailto:junchao.zhang at gmail.com] Sent: Monday, 13 April 2020 11:33 PM To: Bin Liu > Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] inserting multiple rows together into a matrix Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 m=2; n=3; idxm[] = {2, 4}; idxn[] = {3, 7, 9}; v[6] = {0.1, 0.2, ....}; MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); --Junchao Zhang On Mon, Apr 13, 2020 at 9:59 AM Bin Liu > wrote: Hi all, I know how to insert values in one row into the matrix via routine ?MatSetValues?. I understand I logically should be able to insert multiple rows into the matrix with one call of ?MatSetValues?. However, I am not sure how to do it. I searched in the PETSc mail list and did not find a relevant question answered before. Could anyone help me and give me a simple example code? Regards B. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hu.ds.abel at icloud.com Wed Apr 15 20:49:31 2020 From: hu.ds.abel at icloud.com (huabel) Date: Thu, 16 Apr 2020 09:49:31 +0800 Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL In-Reply-To: References: <2E1654F2-55C4-4F8C-A06E-EBE74758E386@icloud.com> Message-ID: > On Apr 16, 2020, at 06:31, Satish Balay wrote: > > From prior e-mail - you wanted to use AMD GPU on OSX. This build below is CPU build - not for GPU. [Karl can confirm] > > I think OSX has OpenCL installed by default [perhaps via xcode?] - so you might just need the additional configure option: --with-opencl=1 > I tried to remove ?--with-viennacl=1 --with-viennacl-dir=?, just use ?--with-opencl=1?, that good for build petsc and examples, but when I run "./examples -h ? there is no message about OpenCL or ViennaCL, so my how can I know it used OpenCL (GPU)? Abel From knepley at gmail.com Wed Apr 15 20:50:53 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Apr 2020 21:50:53 -0400 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: References: Message-ID: On Wed, Apr 15, 2020 at 9:30 PM Bin Liu wrote: > Sure. Thanks for your information. Actually, I used to call MatSetValues > for each column. 
Since my matrix is sparse and I am not sure whether the > non-zeros along each column are identical for each row, I am currently > wondering if it is possible to insert the entire local dense matrix into > the global sparse matrix using a single MatSetValues routine. Would it be > more efficient than I identify the nonzeros and inserting them row-by-row? > I may also set the ?IGNORE_ZERO_ENTRIES? for the global sparse matrix > before ?MatSetValues?. Would it be helpful? > Inserting a dense matrix is likely the wrong thing to do. Generally row-by-row insertion is good. Thanks, Matt > Thanks for your helps. > > > > *From:* Matthew Knepley [mailto:knepley at gmail.com] > *Sent:* Wednesday, 15 April 2020 7:01 PM > *To:* Bin Liu > *Cc:* Junchao Zhang ; petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] inserting multiple rows together into a > matrix > > > > On Wed, Apr 15, 2020 at 1:24 AM Bin Liu wrote: > > Thanks for your example. My problem is resolved. Meanwhile I am wondering, > if it is possible to make this example more flexible. I mean what if the > columns in each row are different? Is there any way to insert them all > together? > > > > No. If the columns are different, you make a separate calls to > MatSetValues(). > > > > Thanks, > > > > Matt > > > > > > Regards > > Bin > > > > *From:* Junchao Zhang [mailto:junchao.zhang at gmail.com] > *Sent:* Monday, 13 April 2020 11:33 PM > *To:* Bin Liu > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] inserting multiple rows together into a > matrix > > > > Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 > > m=2; > > n=3; > > idxm[] = {2, 4}; > > idxn[] = {3, 7, 9}; > > v[6] = {0.1, 0.2, ....}; > > MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); > > > --Junchao Zhang > > > > > > On Mon, Apr 13, 2020 at 9:59 AM Bin Liu wrote: > > Hi all, > > > > I know how to insert values in one row into the matrix via routine > ?MatSetValues?. I understand I logically should be able to insert multiple > rows into the matrix with one call of ?MatSetValues?. However, I am not > sure how to do it. I searched in the PETSc mail list and did not find a > relevant question answered before. Could anyone help me and give me a > simple example code? > > > > Regards > > B. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbllm2018 at hotmail.com Wed Apr 15 20:54:02 2020 From: lbllm2018 at hotmail.com (Bin Liu) Date: Thu, 16 Apr 2020 01:54:02 +0000 Subject: [petsc-users] inserting multiple rows together into a matrix In-Reply-To: References: Message-ID: Sure. Noted. Thank you very much for your help. From: Matthew Knepley [mailto:knepley at gmail.com] Sent: Thursday, 16 April 2020 9:51 AM To: Bin Liu Cc: Junchao Zhang ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] inserting multiple rows together into a matrix On Wed, Apr 15, 2020 at 9:30 PM Bin Liu > wrote: Sure. Thanks for your information. Actually, I used to call MatSetValues for each column. 
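To make the row-by-row pattern Matt recommends concrete, a minimal sketch (A is the assembled global AIJ matrix; nloc, rows[], cols[] and the row-major element matrix Ke[] are hypothetical names for the local element data):

  PetscInt i;
  ierr = MatSetOption(A, MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE); CHKERRQ(ierr); /* optional: exact zeros in Ke are then skipped */
  for (i = 0; i < nloc; i++) {
    /* one call per row: row rows[i], nloc columns, values are the contiguous row block of Ke */
    ierr = MatSetValues(A, 1, &rows[i], nloc, cols, &Ke[i*nloc], ADD_VALUES); CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

With MAT_IGNORE_ZERO_ENTRIES set, zero values passed to MatSetValues do not create nonzero locations in the sparse matrix, which is the effect asked about with IGNORE_ZERO_ENTRIES.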
Since my matrix is sparse and I am not sure whether the non-zeros along each column are identical for each row, I am currently wondering if it is possible to insert the entire local dense matrix into the global sparse matrix using a single MatSetValues routine. Would it be more efficient than I identify the nonzeros and inserting them row-by-row? I may also set the ?IGNORE_ZERO_ENTRIES? for the global sparse matrix before ?MatSetValues?. Would it be helpful? Inserting a dense matrix is likely the wrong thing to do. Generally row-by-row insertion is good. Thanks, Matt Thanks for your helps. From: Matthew Knepley [mailto:knepley at gmail.com] Sent: Wednesday, 15 April 2020 7:01 PM To: Bin Liu > Cc: Junchao Zhang >; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] inserting multiple rows together into a matrix On Wed, Apr 15, 2020 at 1:24 AM Bin Liu > wrote: Thanks for your example. My problem is resolved. Meanwhile I am wondering, if it is possible to make this example more flexible. I mean what if the columns in each row are different? Is there any way to insert them all together? No. If the columns are different, you make a separate calls to MatSetValues(). Thanks, Matt Regards Bin From: Junchao Zhang [mailto:junchao.zhang at gmail.com] Sent: Monday, 13 April 2020 11:33 PM To: Bin Liu > Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] inserting multiple rows together into a matrix Add two rows 2 ,4, and each row has three nonzeros at column 3, 7, 9 m=2; n=3; idxm[] = {2, 4}; idxn[] = {3, 7, 9}; v[6] = {0.1, 0.2, ....}; MatSetValues(mat, m, idxm, n, idxn,v, INSERT_VALUES); --Junchao Zhang On Mon, Apr 13, 2020 at 9:59 AM Bin Liu > wrote: Hi all, I know how to insert values in one row into the matrix via routine ?MatSetValues?. I understand I logically should be able to insert multiple rows into the matrix with one call of ?MatSetValues?. However, I am not sure how to do it. I searched in the PETSc mail list and did not find a relevant question answered before. Could anyone help me and give me a simple example code? Regards B. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 15 20:58:57 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 20:58:57 -0500 (CDT) Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: The crash is inside Superlu_DIST - so don't know what to suggest. Might have to debug this via debugger and check with Sherry. Satish On Wed, 15 Apr 2020, Mark Adams wrote: > Ah, OK 'check' will test SuperLU. 
Semi worked: > > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make > PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > check > Running check examples to verify correct installation > Using PETSC_DIR=/ccs/home/adams/petsc and > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes > 2c2,38 > < Number of SNES iterations = 2 > --- > > CUDA version: v 10010 > > CUDA Devices: > > > > 0 : Tesla V100-SXM2-16GB 7 0 > > Global memory: 16128 mb > > Shared memory: 48 kb > > Constant memory: 64 kb > > Block registers: 65536 > > > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion > `cacheNode != __null' failed. > > [h16n07:78357] *** Process received signal *** > > [h16n07:78357] Signal: Aborted (6) > > [h16n07:78357] Signal code: (1704218624) > > [h16n07:78357] [ 0] [0x2000000504d8] > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] > > [h16n07:78357] [ 3] /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] > > [h16n07:78357] [ 4] > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > [h16n07:78357] [ 5] > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > [h16n07:78357] [ 6] > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] > > [h16n07:78357] [ 7] > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] > > [h16n07:78357] [ 8] > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] > > [h16n07:78357] [ 9] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] > > [h16n07:78357] [10] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] > > [h16n07:78357] [11] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] > > [h16n07:78357] [12] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] > > [h16n07:78357] [13] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] > > [h16n07:78357] [14] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] > > [h16n07:78357] [15] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] > > [h16n07:78357] [16] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] > > [h16n07:78357] [17] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] > > [h16n07:78357] [18] > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] > > [h16n07:78357] [19] ./ex19[0x10002ac8] > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] > > [h16n07:78357] [21] > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] > > [h16n07:78357] *** 
End of error message *** > > ERROR: One or more process (first noticed rank 0) terminated with signal > 6 > /ccs/home/adams/petsc/src/snes/tutorials > Possible problem with ex19 running with superlu_dist, diffs above > ========================================= > > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay wrote: > > > Please send configure.log > > > > This is what I get on my linux build: > > > > [balay at p1 petsc]$ ./configure > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 > > --with-openmp=1 --download-superlu-dist=1 && make && make check > > > > Running check examples to verify correct installation > > Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes > > 1a2,19 > > > CUDA version: v 10020 > > > CUDA Devices: > > > > > > 0 : Quadro T2000 7 5 > > > Global memory: 3911 mb > > > Shared memory: 48 kb > > > Constant memory: 64 kb > > > Block registers: 65536 > > > > > > CUDA version: v 10020 > > > CUDA Devices: > > > > > > 0 : Quadro T2000 7 5 > > > Global memory: 3911 mb > > > Shared memory: 48 kb > > > Constant memory: 64 kb > > > Block registers: 65536 > > > > > /home/balay/petsc/src/snes/tutorials > > Possible problem with ex19 running with superlu_dist, diffs above > > ========================================= > > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process > > Completed test examples > > > > > > On Wed, 15 Apr 2020, Mark Adams wrote: > > > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay wrote: > > > > > > > The build should work. It should give some verbose info [at runtime] > > > > regarding GPUs - from the following code. > > > > > > > > > > > I don't see that and I am running GPUs in my code and have gotten > > cusparse > > > LU to run. Should I use '-info :sys:' ? > > > > > > > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > > > > void DisplayHeader() > > > > { > > > > const int kb = 1024; > > > > const int mb = kb * kb; > > > > // cout << "NBody.GPU" << endl << "=========" << endl << endl; > > > > > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." << > > > > THRUST_MINOR_VERSION << endl << endl; > > > > > > > > int devCount; > > > > cudaGetDeviceCount(&devCount); > > > > printf( "CUDA Devices: \n \n"); > > > > > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > > > > > > > Satish > > > > > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > > > > > > > > I remember Barry said superlu gpu support is broken. > > > > > --Junchao Zhang > > > > > > > > > > > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams wrote: > > > > > > > > > > > How does one use SuperLU with GPUs. I don't seem to get any GPU > > > > > > performance data so I assume GPUs are not getting turned on. Am I > > wrong > > > > > > about that? 
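Coming back to the cudahook assertion in dDestroy_LU above: to get a full stack into a debugger, PETSc's generic runtime options -on_error_attach_debugger and -start_in_debugger noxterm are often the quickest route (these are standard PETSc options, not specific to this report), in addition to stepping through SuperLU_DIST itself as Satish suggests.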
> > > > > > > > > > > > I configure with: > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp" > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC > > > > -fopenmp" > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > > --with-cxx=mpicxx > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 > > --download-metis > > > > > > --download-superlu --download-superlu_dist --with-make-np=16 > > > > > > --download-parmetis --download-triangle > > > > > > > > > > > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 > > --with-x=0 > > > > > > --with-64-bit-indices=0 --with-debugging=0 > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > > > > > > --with-threadsaftey=1 --with-log=1 > > > > > > > > > > > > Thanks, > > > > > > Mark > > > > > > > > > > > > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Wed Apr 15 21:04:48 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 15 Apr 2020 21:04:48 -0500 (CDT) Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL In-Reply-To: References: <2E1654F2-55C4-4F8C-A06E-EBE74758E386@icloud.com> Message-ID: On Thu, 16 Apr 2020, huabel via petsc-users wrote: > > > > On Apr 16, 2020, at 06:31, Satish Balay wrote: > > > > From prior e-mail - you wanted to use AMD GPU on OSX. This build below is CPU build - not for GPU. [Karl can confirm] > > > > I think OSX has OpenCL installed by default [perhaps via xcode?] - so you might just need the additional configure option: --with-opencl=1 > > > I tried to remove ?--with-viennacl=1 --with-viennacl-dir=?, just use ?--with-opencl=1?, that good for build petsc and examples, but when I run "./examples -h ? there is no message about OpenCL or ViennaCL, so my how can I know it used OpenCL (GPU)? PETSc does not directly use opencl. Its needed for viennacl. So if you remove viennacl from petsc build - then sure - there won't be any viennacl or opencl messages. If you have a build of petsc with viennacl+opencl - you can try running examples with viennacl (manually). 
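A quick way to confirm that the ViennaCL classes are actually active at runtime is to query the object type after the options have been applied (a sketch; x stands for whatever Vec the application creates, with the usual ierr/CHKERRQ handling assumed):

  VecType vtype;
  ierr = VecGetType(x, &vtype); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "vec type: %s\n", vtype); CHKERRQ(ierr);

With -vec_type viennacl in effect this should print seqviennacl or mpiviennacl. Note this only shows that the ViennaCL types are in use; which OpenCL device (CPU versus GPU) ViennaCL ends up computing on is a separate question. The git grep listing that follows shows which tests exercise ViennaCL.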
$ git grep requires: |grep viennacl src/ksp/ksp/tests/ex43.c: requires: viennacl datafilespath double !complex !define(PETSC_USE_64BIT_INDICES) src/ksp/ksp/tests/ex43.c: requires: viennacl datafilespath double !complex !define(PETSC_USE_64BIT_INDICES) src/ksp/ksp/tutorials/ex59.c: requires: viennacl src/ksp/ksp/tutorials/ex7.c: requires: viennacl src/ksp/ksp/tutorials/ex7.c: requires: viennacl src/ksp/ksp/tutorials/ex71.c: requires: mumps cuda viennacl src/ksp/ksp/tutorials/ex71.c: requires: mumps cuda viennacl src/ksp/ksp/tutorials/ex71.c: requires: mkl_pardiso cuda viennacl src/ksp/ksp/tutorials/ex71.c: requires: viennacl src/ksp/ksp/tutorials/ex72.c: requires: viennacl src/mat/tests/ex1.c: requires: cuda viennacl src/mat/tests/ex204.c: requires: viennacl src/mat/tests/ex23.c: requires: viennacl src/mat/tests/ex301.c: requires: viennacl src/mat/tests/ex301.c: requires: viennacl src/snes/tutorials/ex12.c: requires: !single viennacl src/snes/tutorials/ex69.c: requires: viennacl src/snes/tutorials/ex69.c: requires: viennacl src/snes/tutorials/ex69.c: requires: viennacl src/vec/vec/tests/ex22.c: requires: viennacl src/vec/vec/tests/ex23.c: requires: viennacl src/vec/vec/tests/ex24.c: requires: viennacl src/vec/vec/tests/ex34.c: requires: viennacl src/vec/vec/tests/ex38.c: requires: viennacl You can check some of the above examples - or run 'make alltests' [or its variants - 'make -j2 test'] that run all tests including viennacl tests. Satish From rupp at iue.tuwien.ac.at Wed Apr 15 21:43:57 2020 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Thu, 16 Apr 2020 04:43:57 +0200 Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL In-Reply-To: References: <2E1654F2-55C4-4F8C-A06E-EBE74758E386@icloud.com> Message-ID: Hi, yes, Satish is right, this build is a CPU-build. Add `--with-opencl=1` :-) Best regards, Karli On 4/16/20 12:31 AM, Satish Balay wrote: > From prior e-mail - you wanted to use AMD GPU on OSX. This build below is CPU build - not for GPU. [Karl can confirm] > > I think OSX has OpenCL installed by default [perhaps via xcode?] - so you might just need the additional configure option: --with-opencl=1 > > Satish > > On Thu, 16 Apr 2020, huabel via petsc-users wrote: > >> Hi Satish, that patch is good, thank you! >> >> >>> On Apr 15, 2020, at 23:58, Satish Balay wrote: >>> >>>> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --prefix=/Users/fire/opt/petsc313 --with-zlib --with-viennacl=1 --with-viennacl-dir=/Users/fire/opt/viennacl >>> >>> I guess you are running viennacl (opencl) on CPU. >>> >>> please try the attached patch. >>> >>> cd petsc >>> patch -Np1 < viennacl.patch >>> >>> Or use branch balay/viennacl-cpu-check/maint in petsc repo >>> >>> Satish >>> >>> On Wed, 15 Apr 2020, huabel via petsc-users wrote: >>> >>>> Dear Users, >>>> >>>> I?m try to use petsc3.13 with ViennaCL , when I try to run src/vec/vec/tutorials/ex1.c, I get next error, thanks. >>>> >>>> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL >>>> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> Expected in: flat namespace >>>> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> [1] 22602 abort ./ex1 >>>> >>>> ? tutorials git:(master) ? 
./ex1 -vec_type viennacl -mat_type aijviennacl >>>> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL >>>> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> Expected in: flat namespace >>>> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> [1] 23268 abort ./ex1 -vec_type viennacl -mat_type aijviennacl >>>> >>>> >>>> >>>> Thanks >>>> Abel Hu >>>> >>> From mfadams at lbl.gov Thu Apr 16 06:59:24 2020 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 16 Apr 2020 07:59:24 -0400 Subject: [petsc-users] CUDA error In-Reply-To: <1A17CE7C-F799-4A84-9068-8B6598DF8D5B@gmail.com> References: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> <1A17CE7C-F799-4A84-9068-8B6598DF8D5B@gmail.com> Message-ID: On Wed, Apr 15, 2020 at 3:18 PM Stefano Zampini wrote: > > > On Apr 15, 2020, at 10:14 PM, Mark Adams wrote: > > Thanks, it looks correct. I am getting memory leaks (appended) > > And something horrible is going on with performance: > > MatLUFactorNum 130 1.0 9.2220e+00 1.0 6.51e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 30 0 0 0 0 30 0 0 0 0 71 0 390 3.33e+02 0 > 0.00e+00 0 > > > MatLUFactorNum 130 1.0 6.5177e-01 1.0 1.28e+09 1.0 0.0e+00 0.0e+00 > 0.0e+00 4 1 0 0 0 4 1 0 0 0 1966 0 0 0.00e+00 0 > 0.00e+00 0 > > > Can you describe these numbers? It seems that in the second case the > factorization is run on the CPU (as I explained in my previous message) > > This is with and without cusparse. So yes the second case is the normal CPU solver. Oh, the factorization is not done on the GPU. So only the solves are on the GPU. Alright that is not useful for me. Thanks, > This is not urgent, but I'd like to get a serial LU GPU solver at > some point. > > > Thanks again, > Mark > > Lots of these: > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in > /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ > veccuda2.cu > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in > /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ > veccuda2.cu > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in > /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ > veccuda2.cu > > > Yes, as I said, the code is in bad shape. I?ll see what I can do. > > On Wed, Apr 15, 2020 at 12:47 PM Stefano Zampini < > stefano.zampini at gmail.com> wrote: > >> Mark >> >> attached is the patch. I will open an MR in the next days if you confirm >> it is working for you >> The issue is that CUSPARSE does not have a way to compute the triangular >> factors, so we demand the computation of the factors to PETSc (CPU). These >> factors are then copied to the GPU. >> What was happening in the second step of SNES, was that the factors were >> never updated since the offloadmask was never updated. >> >> The real issue is that the CUSPARSE support in PETSc is really in bad >> shape and mostly untested, with coding solutions that are probably outdated >> now. >> I'll see what I can do to fix the class if I have time in the next weeks. >> >> Stefano >> >> Il giorno mer 15 apr 2020 alle ore 17:21 Mark Adams ha >> scritto: >> >>> >>> >>> On Wed, Apr 15, 2020 at 8:24 AM Stefano Zampini < >>> stefano.zampini at gmail.com> wrote: >>> >>>> Mark >>>> >>>> I have fixed few things in the solver and it is tested with the current >>>> master. >>>> >>> >>> I rebased with master over the weekend .... >>> >>> >>>> Can you write a MWE to reproduce the issue? Which version of CUDA and >>>> CUSPARSE are you using? 
>>>> >>> >>> You can use mark/feature-xgc-interface-rebase branch and add '-mat_type >>> seqaijcusparse -fp_pc_factor_mat_solver_type cusparse >>> -mat_cusparse_storage_format ell -vec_type cuda' >>> to dm/impls/plex/tutorials/ex10.c >>> >>> The first stage, SNES solve, actually looks OK here. Maybe. >>> >>> Thanks, >>> >>> 10:01 mark/feature-xgc-interface-rebase *= ~/petsc$ make -f gmakefile >>> test search='dm_impls_plex_tutorials-ex10_0' >>> /usr/bin/python /ccs/home/adams/petsc/config/gmakegentest.py >>> --petsc-dir=/ccs/home/adams/petsc --petsc-arch=arch-summit-opt64-gnu-cuda >>> --testdir=./arch-summit-opt64-gnu-cuda/tests >>> Using MAKEFLAGS: search=dm_impls_plex_tutorials-ex10_0 >>> CC >>> arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10.o >>> CLINKER >>> arch-summit-opt64-gnu-cuda/tests/dm/impls/plex/tutorials/ex10 >>> TEST >>> arch-summit-opt64-gnu-cuda/tests/counts/dm_impls_plex_tutorials-ex10_0.counts >>> ok dm_impls_plex_tutorials-ex10_0 >>> not ok diff-dm_impls_plex_tutorials-ex10_0 # Error code: 1 >>> # 14,16c14,16 >>> # < 0 SNES Function norm 6.184233768573e-04 >>> # < 1 SNES Function norm 1.467479466750e-08 >>> # < 2 SNES Function norm 7.863111141350e-12 >>> # --- >>> # > 0 SNES Function norm 6.184233768572e-04 >>> # > 1 SNES Function norm 1.467479466739e-08 >>> # > 2 SNES Function norm 7.863102870090e-12 >>> # 18,31c18,256 >>> # < 0 SNES Function norm 6.182952107532e-04 >>> # < 1 SNES Function norm 7.336382211149e-09 >>> # < 2 SNES Function norm 1.566979901443e-11 >>> # < Nonlinear fp_ solve converged due to >>> CONVERGED_FNORM_RELATIVE iterations 2 >>> # < 0 SNES Function norm 6.183592738545e-04 >>> # < 1 SNES Function norm 7.337681407420e-09 >>> # < 2 SNES Function norm 1.408823933908e-11 >>> # < Nonlinear fp_ solve converged due to >>> CONVERGED_FNORM_RELATIVE iterations 2 >>> # < [0] TSAdaptChoose_Basic(): Estimated scaled local truncation >>> error 0.0396569, accepting step of size 1e-06 >>> # < 1 TS dt 1.25e-06 time 1e-06 >>> # < 1) species-0: charge density= -1.6024814608984e+01 >>> z-momentum= 2.0080682964364e-19 energy= 1.2018000284846e+05 >>> # < 1) species-1: charge density= 1.6021676653316e+01 >>> z-momentum= 1.4964483981137e-17 energy= 1.2017223215083e+05 >>> # < 1) species-2: charge density= 2.8838441139703e-03 >>> z-momentum= -1.1062018110807e-23 energy= 1.2019641370376e-03 >>> # < 1) Total: charge density= -2.5411155383649e-04, >>> momentum= 1.5165279748763e-17, energy= 2.4035223620125e+05 (m_i[0]/m_e = >>> 3670.94, 140 cells), 1 sub threads >>> # --- >>> # > 0 SNES Function norm 6.182952107531e-04 >>> # > 1 SNES Function norm 6.181600164904e-04 >>> # > 2 SNES Function norm 6.180249471739e-04 >>> # > 3 SNES Function norm 6.178899987549e-04 >>> >>> >>>> I was planning to reorganize the factor code in AIJCUSPARSE in the next >>>> days. 
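The programmatic equivalent of the '-mat_type seqaijcusparse -pc_factor_mat_solver_type cusparse -vec_type cuda' options quoted above, as a minimal sketch (A, x, ksp and pc are assumed objects; this is not a reproduction of ex10):

  ierr = MatSetType(A, MATAIJCUSPARSE); CHKERRQ(ierr);               /* cuSPARSE matrix storage */
  ierr = VecSetType(x, VECCUDA); CHKERRQ(ierr);                      /* CUDA vectors */
  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU); CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERCUSPARSE); CHKERRQ(ierr);

As Stefano explains, with this solver the numeric factorization itself still runs on the CPU (cuSPARSE provides no LU/ILU factorization routine); the freshly computed factors are copied to the GPU and only the triangular solves execute there. The git grep listing that follows shows the existing tests that already exercise the cusparse solver.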
>>>> >>>> kl-18967:petsc zampins$ git grep "solver_type cusparse" >>>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>>> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >>>> cusparse* -mat_cusparse_storage_format ell -vec_type cuda -pc_type ilu >>>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>>> ${DATAFILESPATH}/matrices/shallow_water1 -mat_type seqaijcusparse >>>> -pc_factor_mat_*solver_type cusparse* -mat_cusparse_storage_format hyb >>>> -vec_type cuda -ksp_type cg -pc_type icc >>>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>>> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >>>> cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type >>>> bicg -pc_type ilu >>>> src/ksp/ksp/examples/tests/ex43.c: args: -f >>>> ${DATAFILESPATH}/matrices/cfd.2.10 -mat_type seqaijcusparse -pc_factor_mat_*solver_type >>>> cusparse* -mat_cusparse_storage_format csr -vec_type cuda -ksp_type >>>> bicg -pc_type ilu -pc_factor_mat_ordering_type nd >>>> src/ksp/ksp/examples/tutorials/ex46.c: args: -dm_mat_type >>>> aijcusparse -dm_vec_type cuda -random_exact_sol -pc_type ilu -pc_factor_mat_*solver_type >>>> cusparse* >>>> src/ksp/ksp/examples/tutorials/ex59.c: args: -subdomain_mat_type >>>> aijcusparse -physical_pc_bddc_dirichlet_pc_factor_mat_*solver_type >>>> cusparse* >>>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>>> -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu >>>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>>> -vec_type cuda -sub_ksp_type preonly -sub_pc_type ilu >>>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>>> -vec_type cuda >>>> src/ksp/ksp/examples/tutorials/ex7.c: args: -ksp_monitor_short >>>> -mat_type aijcusparse -sub_pc_factor_mat_*solver_type cusparse* >>>> -vec_type cuda >>>> src/ksp/ksp/examples/tutorials/ex71.c: args: -pde_type Poisson >>>> -cells 7,9,8 -dim 3 -ksp_view -pc_bddc_coarse_redundant_pc_type svd >>>> -ksp_error_if_not_converged -pc_bddc_dirichlet_pc_type cholesky >>>> -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* >>>> -pc_bddc_dirichlet_pc_factor_mat_ordering_type nd -pc_bddc_neumann_pc_type >>>> cholesky -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* >>>> -pc_bddc_neumann_pc_factor_mat_ordering_type nd -matis_localmat_type >>>> aijcusparse >>>> src/ksp/ksp/examples/tutorials/ex72.c: args: -f0 >>>> ${DATAFILESPATH}/matrices/medium -ksp_monitor_short -ksp_view -mat_view >>>> ascii::ascii_info -mat_type aijcusparse -pc_factor_mat_*solver_type >>>> cusparse* -pc_type ilu -vec_type cuda >>>> src/snes/examples/tutorials/ex12.c: args: -matis_localmat_type >>>> aijcusparse -pc_bddc_dirichlet_pc_factor_mat_*solver_type cusparse* >>>> -pc_bddc_neumann_pc_factor_mat_*solver_type cusparse* >>>> >>>> On Apr 15, 2020, at 2:20 PM, Mark Adams wrote: >>>> >>>> I tried using a serial direct solver in cusparse and got bad numerics: >>>> >>>> -vector_type cuda -mat_type aijcusparse -pc_factor_mat_solver_type >>>> cusparse >>>> >>>> Before I start debugging this I wanted to see if there are any known >>>> issues that I should be aware of. >>>> >>>> Thanks, >>>> >>>> >>>> >> >> -- >> Stefano >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
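For reference, the selection made above with -mat_type aijcusparse -pc_factor_mat_solver_type cusparse can also be made in code. A minimal sketch, assuming a matrix A and the usual ierr/CHKERRQ error handling of a typical PETSc application (as noted earlier in the thread, the triangular factors are still computed on the CPU and copied to the GPU, so only the solves run on the device):

  Mat A;
  KSP ksp;
  PC  pc;

  /* A is assumed to be created and assembled elsewhere */
  ierr = MatSetType(A, MATSEQAIJCUSPARSE);CHKERRQ(ierr);
  ierr = KSPCreate(PETSC_COMM_SELF, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCILU);CHKERRQ(ierr);          /* or PCLU for a complete factorization */
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERCUSPARSE);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);        /* command-line options still override these choices */
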
URL: From me at karlrupp.net Wed Apr 15 21:43:11 2020 From: me at karlrupp.net (Karl Rupp) Date: Thu, 16 Apr 2020 04:43:11 +0200 Subject: [petsc-users] dyld: Symbol not found: _MatCreate_MPIAIJViennaCL In-Reply-To: References: <2E1654F2-55C4-4F8C-A06E-EBE74758E386@icloud.com> Message-ID: <55d618f4-3376-24f0-02d9-be22c4373a9b@karlrupp.net> Hi, yes, Satish is right, this build is a CPU-build. Add `--with-opencl=1` :-) Best regards, Karli On 4/16/20 12:31 AM, Satish Balay wrote: > From prior e-mail - you wanted to use AMD GPU on OSX. This build below is CPU build - not for GPU. [Karl can confirm] > > I think OSX has OpenCL installed by default [perhaps via xcode?] - so you might just need the additional configure option: --with-opencl=1 > > Satish > > On Thu, 16 Apr 2020, huabel via petsc-users wrote: > >> Hi Satish, that patch is good, thank you! >> >> >>> On Apr 15, 2020, at 23:58, Satish Balay wrote: >>> >>>> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --prefix=/Users/fire/opt/petsc313 --with-zlib --with-viennacl=1 --with-viennacl-dir=/Users/fire/opt/viennacl >>> >>> I guess you are running viennacl (opencl) on CPU. >>> >>> please try the attached patch. >>> >>> cd petsc >>> patch -Np1 < viennacl.patch >>> >>> Or use branch balay/viennacl-cpu-check/maint in petsc repo >>> >>> Satish >>> >>> On Wed, 15 Apr 2020, huabel via petsc-users wrote: >>> >>>> Dear Users, >>>> >>>> I?m try to use petsc3.13 with ViennaCL , when I try to run src/vec/vec/tutorials/ex1.c, I get next error, thanks. >>>> >>>> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL >>>> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> Expected in: flat namespace >>>> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> [1] 22602 abort ./ex1 >>>> >>>> ? tutorials git:(master) ? ./ex1 -vec_type viennacl -mat_type aijviennacl >>>> dyld: Symbol not found: _MatCreate_MPIAIJViennaCL >>>> Referenced from: /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> Expected in: flat namespace >>>> in /Users/fire/opt/petsc313/lib/libpetsc.3.13.dylib >>>> [1] 23268 abort ./ex1 -vec_type viennacl -mat_type aijviennacl >>>> >>>> >>>> >>>> Thanks >>>> Abel Hu >>>> >>> From sam.guo at cd-adapco.com Thu Apr 16 12:40:58 2020 From: sam.guo at cd-adapco.com (Sam Guo) Date: Thu, 16 Apr 2020 10:40:58 -0700 Subject: [petsc-users] Xwindow dependency Message-ID: Dear PETSc dev team, Is it possible to configure PETSc not to depend on Xwindow stuff on linux? Thanks, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Apr 16 12:47:01 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 16 Apr 2020 12:47:01 -0500 (CDT) Subject: [petsc-users] Xwindow dependency In-Reply-To: References: Message-ID: ./configure --with-x=0 Satish On Thu, 16 Apr 2020, Sam Guo wrote: > Dear PETSc dev team, > Is it possible to configure PETSc not to depend on Xwindow stuff on > linux? > > Thanks, > Sam > From sam.guo at cd-adapco.com Thu Apr 16 13:59:22 2020 From: sam.guo at cd-adapco.com (Sam Guo) Date: Thu, 16 Apr 2020 11:59:22 -0700 Subject: [petsc-users] Xwindow dependency In-Reply-To: References: Message-ID: Thanks! On Thu, Apr 16, 2020 at 11:22 AM Satish Balay wrote: > ./configure --with-x=0 > > Satish > > On Thu, 16 Apr 2020, Sam Guo wrote: > > > Dear PETSc dev team, > > Is it possible to configure PETSc not to depend on Xwindow stuff on > > linux? 
> > > > Thanks, > > Sam > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Thu Apr 16 21:06:27 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Thu, 16 Apr 2020 21:06:27 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> Message-ID: Randy, Up to now I could not reproduce your error, even with the biggest mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 While I continue doing test, you can try other options. It looks you want to duplicate a vector to subcomms. I don't think you need the two lines: call AOApplicationToPetsc(aoParent,nis,ind1,ierr) call AOApplicationToPetsc(aoSub,nis,ind2,ierr) In addition, you can use simpler and more memory-efficient index sets. There is a petsc example for this task, see case 3 in https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c BTW, it is good to use petsc master so we are on the same page. --Junchao Zhang On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie wrote: > Hi Junchao, > > So I was able to create a small test code that duplicates the issue we > have been having, and it is attached to this email in a zip file. > Included is the test.F90 code, the commands to duplicate crash and to > duplicate a successful run, output errors, and our petsc configuration. > > Our findings to date include: > > The error is reproducible in a very short time with this script > It is related to nproc*nsubs and (although to a less extent) to DM grid > size > It happens regardless of MPI implementation (mpich, intel mpi 2018, 2019, > openmpi) or compiler (gfortran/gcc , intel 2018) > No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to slightly > increase the limit, but still fails on the full machine set. > Nothing looks interesting on valgrind > > Our initial tests were carried out on an Azure cluster, but we also tested > on our smaller cluster, and we found the following: > > Works: > $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test > -nsubs 80 -nx 100 -ny 100 -nz 100 > > Crashes (this works on Azure) > $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test > -nsubs 80 -nx 100 -ny 100 -nz 100 > > So it looks like it may also be related to the physical number of nodes as > well. > > In any case, even with 2560 processes on 192 cores the memory does not go > above 3.5 Gbyes so you don?t need a huge cluster to test. > > Thanks, > > Randy M. > > > > On Apr 14, 2020, at 12:23 PM, Junchao Zhang > wrote: > > There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I > doubted it was the problem. Even if users configure petsc with 64-bit > indices, we use PetscMPIInt in MPI calls. So it is not a problem. > Try -vecscatter_type mpi1 to restore to the original VecScatter > implementation. If the problem still remains, could you provide a test > example for me to debug? > > --Junchao Zhang > > > On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie > wrote: > >> Hi Junchao, >> >> We have tried your two suggestions but the problem remains. >> And the problem seems to be on the MPI_Isend line 117 in >> PetscGatherMessageLengths and not MPI_AllReduce. >> >> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the >> problem must be elsewhere and not MPI. 
>> >> Give that this is a 64 bit indices build of PETSc, is there some possible >> incompatibility between PETSc and MPI calls? >> >> We are open to any other possible suggestions to try as other than >> valgrind on thousands of processes we seem to have run out of ideas. >> >> Thanks, Randy M. >> >> On Apr 13, 2020, at 8:54 AM, Junchao Zhang >> wrote: >> >> >> --Junchao Zhang >> >> >> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang >> wrote: >> >>> Randy, >>> Someone reported similar problem before. It turned out an Intel MPI >>> MPI_Allreduce bug. A workaround is setting the environment variable >>> I_MPI_ADJUST_ALLREDUCE=1.arr >>> >> Correct: I_MPI_ADJUST_ALLREDUCE=1 >> >>> But you mentioned mpich also had the error. So maybe the problem is >>> not the same. So let's try the workaround first. If it doesn't work, add >>> another petsc option -build_twosided allreduce, which is a workaround for >>> Intel MPI_Ibarrier bugs we met. >>> Thanks. >>> --Junchao Zhang >>> >>> >>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie >>> wrote: >>> >>>> Dear PETSc users, >>>> >>>> We are trying to understand an issue that has come up in running our >>>> code on a large cloud cluster with a large number of processes and subcomms. >>>> This is code that we use daily on multiple clusters without problems, >>>> and that runs valgrind clean for small test problems. >>>> >>>> The run generates the following messages, but doesn?t crash, just seems >>>> to hang with all processes continuing to show activity: >>>> >>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in >>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in >>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in >>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in >>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>>> >>>> >>>> Looking at line 117 in PetscGatherMessageLengths we find the offending >>>> statement is the MPI_Isend: >>>> >>>> >>>> /* Post the Isends with the message length-info */ >>>> for (i=0,j=0; i>>> if (ilengths[i]) { >>>> ierr = >>>> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>>> j++; >>>> } >>>> } >>>> >>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the >>>> same problem. >>>> >>>> We suspect there is some limit being set on this cloud cluster on the >>>> number of file connections or something, but we don?t know. >>>> >>>> Anyone have any ideas? We are sort of grasping for straws at this point. >>>> >>>> Thanks, Randy M. >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Thu Apr 16 23:13:13 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Thu, 16 Apr 2020 23:13:13 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> Message-ID: Randy, I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also found the error went away with petsc-3.13. However, I have not figured out what is the bug and which commit fixed it :). So at your side, it is better to use the latest petsc. 
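On the index sets: as a rough illustration of the simpler, stride-based sets suggested earlier in this thread for pulling a parent-communicator vector into per-subcomm copies, something along these lines could replace the general ISs. The variable names are made up, it assumes the parent vector and each subcomm copy share the same global size and numbering, and case 3 of ex9.c remains the reference for the full pattern:

  Vec        xParent, xSub;   /* xParent on the parent comm, xSub on this rank's subcomm */
  IS         ix, iy;
  VecScatter scat;
  PetscInt   rstart, rend, n;

  ierr = VecGetOwnershipRange(xSub, &rstart, &rend);CHKERRQ(ierr);
  n    = rend - rstart;
  /* each rank pulls exactly the rows it owns in its subcomm copy out of the parent vector */
  ierr = ISCreateStride(PETSC_COMM_SELF, n, rstart, 1, &ix);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, n, rstart, 1, &iy);CHKERRQ(ierr);
  ierr = VecScatterCreate(xParent, ix, xSub, iy, &scat);CHKERRQ(ierr);
  ierr = ISDestroy(&ix);CHKERRQ(ierr);
  ierr = ISDestroy(&iy);CHKERRQ(ierr);
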
--Junchao Zhang On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang wrote: > Randy, > Up to now I could not reproduce your error, even with the biggest mpirun > -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 > While I continue doing test, you can try other options. It looks you > want to duplicate a vector to subcomms. I don't think you need the two > lines: > > call AOApplicationToPetsc(aoParent,nis,ind1,ierr) > call AOApplicationToPetsc(aoSub,nis,ind2,ierr) > > In addition, you can use simpler and more memory-efficient index sets. > There is a petsc example for this task, see case 3 in > https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c > BTW, it is good to use petsc master so we are on the same page. > --Junchao Zhang > > > On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie > wrote: > >> Hi Junchao, >> >> So I was able to create a small test code that duplicates the issue we >> have been having, and it is attached to this email in a zip file. >> Included is the test.F90 code, the commands to duplicate crash and to >> duplicate a successful run, output errors, and our petsc configuration. >> >> Our findings to date include: >> >> The error is reproducible in a very short time with this script >> It is related to nproc*nsubs and (although to a less extent) to DM grid >> size >> It happens regardless of MPI implementation (mpich, intel mpi 2018, 2019, >> openmpi) or compiler (gfortran/gcc , intel 2018) >> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to >> slightly increase the limit, but still fails on the full machine set. >> Nothing looks interesting on valgrind >> >> Our initial tests were carried out on an Azure cluster, but we also >> tested on our smaller cluster, and we found the following: >> >> Works: >> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test >> -nsubs 80 -nx 100 -ny 100 -nz 100 >> >> Crashes (this works on Azure) >> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test >> -nsubs 80 -nx 100 -ny 100 -nz 100 >> >> So it looks like it may also be related to the physical number of nodes >> as well. >> >> In any case, even with 2560 processes on 192 cores the memory does not go >> above 3.5 Gbyes so you don?t need a huge cluster to test. >> >> Thanks, >> >> Randy M. >> >> >> >> On Apr 14, 2020, at 12:23 PM, Junchao Zhang >> wrote: >> >> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I >> doubted it was the problem. Even if users configure petsc with 64-bit >> indices, we use PetscMPIInt in MPI calls. So it is not a problem. >> Try -vecscatter_type mpi1 to restore to the original VecScatter >> implementation. If the problem still remains, could you provide a test >> example for me to debug? >> >> --Junchao Zhang >> >> >> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie >> wrote: >> >>> Hi Junchao, >>> >>> We have tried your two suggestions but the problem remains. >>> And the problem seems to be on the MPI_Isend line 117 in >>> PetscGatherMessageLengths and not MPI_AllReduce. >>> >>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the >>> problem must be elsewhere and not MPI. >>> >>> Give that this is a 64 bit indices build of PETSc, is there some >>> possible incompatibility between PETSc and MPI calls? >>> >>> We are open to any other possible suggestions to try as other than >>> valgrind on thousands of processes we seem to have run out of ideas. >>> >>> Thanks, Randy M. 
>>> >>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang >>> wrote: >>> >>> >>> --Junchao Zhang >>> >>> >>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang >>> wrote: >>> >>>> Randy, >>>> Someone reported similar problem before. It turned out an Intel MPI >>>> MPI_Allreduce bug. A workaround is setting the environment variable >>>> I_MPI_ADJUST_ALLREDUCE=1.arr >>>> >>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>> >>>> But you mentioned mpich also had the error. So maybe the problem is >>>> not the same. So let's try the workaround first. If it doesn't work, add >>>> another petsc option -build_twosided allreduce, which is a workaround for >>>> Intel MPI_Ibarrier bugs we met. >>>> Thanks. >>>> --Junchao Zhang >>>> >>>> >>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie >>>> wrote: >>>> >>>>> Dear PETSc users, >>>>> >>>>> We are trying to understand an issue that has come up in running our >>>>> code on a large cloud cluster with a large number of processes and subcomms. >>>>> This is code that we use daily on multiple clusters without problems, >>>>> and that runs valgrind clean for small test problems. >>>>> >>>>> The run generates the following messages, but doesn?t crash, just >>>>> seems to hang with all processes continuing to show activity: >>>>> >>>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in >>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in >>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in >>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in >>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>>>> >>>>> >>>>> Looking at line 117 in PetscGatherMessageLengths we find the offending >>>>> statement is the MPI_Isend: >>>>> >>>>> >>>>> /* Post the Isends with the message length-info */ >>>>> for (i=0,j=0; i>>>> if (ilengths[i]) { >>>>> ierr = >>>>> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>>>> j++; >>>>> } >>>>> } >>>>> >>>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving >>>>> the same problem. >>>>> >>>>> We suspect there is some limit being set on this cloud cluster on the >>>>> number of file connections or something, but we don?t know. >>>>> >>>>> Anyone have any ideas? We are sort of grasping for straws at this >>>>> point. >>>>> >>>>> Thanks, Randy M. >>>>> >>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From san.temporal at gmail.com Fri Apr 17 01:21:59 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Fri, 17 Apr 2020 03:21:59 -0300 Subject: [petsc-users] Error in configure for --download-=mydir Message-ID: Dear all, For 3.12 and 3.13, I get $ export PETSC_DIR=/home/user1/installers/petsc/petsc-3.13.0 $ export PETSC_ARCH=linux-gnu-opt $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx --prefix=/home/user1/usr/local --with-make-np=10 --with-shared-libraries --download-fblaslapack=/home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz --download-mumps=/home/user1/installers/petsc/v5.1.2-p2.tar.gz --download-scalapack=/home/user1/installers/petsc/scalapack-2.0.2.tgz --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 -march=native -mtune=native' =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== Trying to download file:///home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz for FBLASLAPACK =============================================================================== ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Error during download/extract/detection of FBLASLAPACK: Could not locate downloaded package FBLASLAPACK in /home/user1/installers/petsc/petsc-3.13.0/linux-gnu-opt/externalpackages ******************************************************************************* Up to 3.11, these commands worked fine. Am I doing anything wrong? Thanks a lot! -------------- next part -------------- An HTML attachment was scrubbed... URL: From hu.ds.abel at icloud.com Fri Apr 17 02:43:28 2020 From: hu.ds.abel at icloud.com (huabel) Date: Fri, 17 Apr 2020 15:43:28 +0800 Subject: [petsc-users] error: too few arguments to function call (PetscOptionsHasName) Message-ID: <8ECABAA5-9DA4-4DD0-BDAB-040AAFDEF452@icloud.com> Dear PETSc users, I?m learn some base for PETSc , compile file src/benchmarks/PetscMalloc.c , get next error. (Use PETSc 3.13.0) >pwd src/benchmarks >mpicc PetscMalloc.c PetscMalloc.c:53:49: error: too few arguments to function call, expected 4, have 3 ierr = PetscOptionsHasName(NULL,"-malloc",&flg);CHKERRQ(ierr); ~~~~~~~~~~~~~~~~~~~ ^ /usr/local/include/petscoptions.h:18:1: note: 'PetscOptionsHasName' declared here PETSC_EXTERN PetscErrorCode PetscOptionsHasName(PetscOptions,const char[],const char[],PetscBool*); ^ /usr/local/include/petscsys.h:106:24: note: expanded from macro 'PETSC_EXTERN' # define PETSC_EXTERN extern PETSC_VISIBILITY_PUBLIC ^ 1 error generated. Thanks. Abel -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Fri Apr 17 03:10:32 2020 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 17 Apr 2020 10:10:32 +0200 Subject: [petsc-users] error: too few arguments to function call (PetscOptionsHasName) In-Reply-To: <8ECABAA5-9DA4-4DD0-BDAB-040AAFDEF452@icloud.com> References: <8ECABAA5-9DA4-4DD0-BDAB-040AAFDEF452@icloud.com> Message-ID: Old versions of petsc had 3 args for this function, latest version expects 4 (as the compiler error indicates). 
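In this particular case the fix is just the extra options-database argument. A minimal sketch of the current 4-argument call (the first two arguments are the options object and an optional prefix, both of which can be NULL here):

  PetscBool flg;

  /* old 3-argument form: PetscOptionsHasName(NULL,"-malloc",&flg); */
  ierr = PetscOptionsHasName(NULL, NULL, "-malloc", &flg);CHKERRQ(ierr);
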
When in doubt as to what these args are, please refer to the extensive man pages. You can find them all here https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html The page you want for this func is here https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsHasName.html Tip: It is wise to avoid performing a google search of the function name. It can bring you to the man page for an old version of petsc sometimes and this can lead to confusion. Best go directly to the URL above (or access the pages through the petsc web page) to ensure you are looking at the appropriate man pages Thanks Dave On Fri 17. Apr 2020 at 09:43, huabel via petsc-users < petsc-users at mcs.anl.gov> wrote: > Dear PETSc users, > > I?m learn some base for PETSc , compile file src/benchmarks/PetscMalloc.c > , get next error. (Use PETSc 3.13.0) > > >pwd > src/benchmarks > > >mpicc PetscMalloc.c > *PetscMalloc.c:53:49: **error: **too few arguments to function call, > expected 4, have 3* > ierr = PetscOptionsHasName(NULL,"-malloc",&flg);CHKERRQ(ierr); > * ~~~~~~~~~~~~~~~~~~~ ^* > */usr/local/include/petscoptions.h:18:1: note: *'PetscOptionsHasName' > declared here > PETSC_EXTERN PetscErrorCode PetscOptionsHasName(PetscOptions,const > char[],const char[],PetscBool*); > *^* > */usr/local/include/petscsys.h:106:24: note: *expanded from macro > 'PETSC_EXTERN' > # define PETSC_EXTERN extern PETSC_VISIBILITY_PUBLIC > * ^* > 1 error generated. > > Thanks. > Abel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mukkundsunjii at gmail.com Fri Apr 17 04:27:04 2020 From: mukkundsunjii at gmail.com (MUKKUND SUNJII) Date: Fri, 17 Apr 2020 11:27:04 +0200 Subject: [petsc-users] Modification to ex11.c Message-ID: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Greetings, I had been working on ts/tutorials/ex11.c as part of my master?s thesis. In ex11.c, there is a 2D Shallow Water Model. I want to modify the model by adding a source term to the model. This source term would factor in the effects of bathymetry or bed elevation in addition to water height. I know how I will be able to achieve this if just use TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the computation fluxes at the interfaces (therefore, it uses DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) ). Is there a routine that allows me to add a source term for every cell to the RHS function? Thank you in advance! Regards, Mukkund -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Apr 17 06:55:01 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 17 Apr 2020 07:55:01 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: You can write your own DMPlexTSComputeRHSFunctionFVM method like this and give that to the dm. PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) { PetscErrorCode ierr; PetscFunctionBegin; ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); ... PetscFunctionReturn(0); } On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII wrote: > Greetings, > > I had been working on ts/tutorials/ex11.c as part of my master?s thesis. > > In ex11.c, there is a 2D Shallow Water Model. I want to modify the model > by adding a source term to the model. 
This source term would factor in the > effects of bathymetry or bed elevation in addition to water height. > > I know how I will be able to achieve this if just use TSSetRHSFunction(). > However ex11.c makes use of the Riemann Solver for the computation fluxes > at the interfaces (therefore, it uses DMTSSetRHSFunctionLocal(dm, > DMPlexTSComputeRHSFunctionFVM, user) ). > > Is there a routine that allows me to *add* a source term *for every cell* > to the RHS function? > > Thank you in advance! > > Regards, > > Mukkund > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 17 08:00:22 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Apr 2020 09:00:22 -0400 Subject: [petsc-users] Error in configure for --download-=mydir In-Reply-To: References: Message-ID: You need to send configure.log Thanks, Matt On Fri, Apr 17, 2020 at 2:23 AM wrote: > Dear all, > > For 3.12 and 3.13, I get > > $ export PETSC_DIR=/home/user1/installers/petsc/petsc-3.13.0 > $ export PETSC_ARCH=linux-gnu-opt > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > --prefix=/home/user1/usr/local --with-make-np=10 --with-shared-libraries > --download-fblaslapack=/home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz > --download-mumps=/home/user1/installers/petsc/v5.1.2-p2.tar.gz > --download-scalapack=/home/user1/installers/petsc/scalapack-2.0.2.tgz > --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' > FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 > -march=native -mtune=native' > > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > =============================================================================== > Trying to download > file:///home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz for > FBLASLAPACK > =============================================================================== > > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > Error during download/extract/detection of FBLASLAPACK: > Could not locate downloaded package FBLASLAPACK in > /home/user1/installers/petsc/petsc-3.13.0/linux-gnu-opt/externalpackages > > ******************************************************************************* > > Up to 3.11, these commands worked fine. > > Am I doing anything wrong? > > Thanks a lot! > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 17 08:18:15 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Apr 2020 09:18:15 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: Mark is right, you could just add in a source term for each cell this way. If you really wanted a lower level interface, it would go here I think: https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 What would you want the interface to look like? 
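To make Mark's wrapper above concrete, a rough sketch of what its "..." could contain: loop over the cells after the flux computation and add a source evaluated at each cell centroid. The SourceTerm() callback is hypothetical, and whether the contribution needs the cell-volume factor (and whether FV ghost cells must be skipped) depends on the discretization, so treat this only as a starting point:

PetscErrorCode RHSFunctionWithSource(DM dm, PetscReal time, Vec locX, Vec F, void *user)
{
  PetscInt       c, cStart, cEnd;
  PetscScalar    *f;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr);
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {
    PetscReal    vol, centroid[3];
    PetscScalar *fc;

    ierr = DMPlexPointGlobalRef(dm, c, f, &fc);CHKERRQ(ierr);
    if (!fc) continue;                                   /* cell not owned by this process */
    ierr = DMPlexComputeCellGeometryFVM(dm, c, &vol, centroid, NULL);CHKERRQ(ierr);
    /* SourceTerm (user-supplied, hypothetical) adds s(x,t) for each field component into fc;
       multiply by vol here if the assembly convention requires it */
    ierr = SourceTerm(time, centroid, vol, fc, user);CHKERRQ(ierr);
  }
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The wrapper would then be registered with DMTSSetRHSFunctionLocal(dm, RHSFunctionWithSource, user) in place of DMPlexTSComputeRHSFunctionFVM.
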
Thanks, Matt On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: > You can write your own DMPlexTSComputeRHSFunctionFVM method like this and > give that to the dm. > > PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) > { > PetscErrorCode ierr; > PetscFunctionBegin; > ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); > ... > > PetscFunctionReturn(0); > } > > > On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII > wrote: > >> Greetings, >> >> I had been working on ts/tutorials/ex11.c as part of my master?s thesis. >> >> In ex11.c, there is a 2D Shallow Water Model. I want to modify the model >> by adding a source term to the model. This source term would factor in the >> effects of bathymetry or bed elevation in addition to water height. >> >> I know how I will be able to achieve this if just use TSSetRHSFunction(). >> However ex11.c makes use of the Riemann Solver for the computation fluxes >> at the interfaces (therefore, it uses DMTSSetRHSFunctionLocal(dm, >> DMPlexTSComputeRHSFunctionFVM, user) ). >> >> Is there a routine that allows me to *add* a source term *for every cell* >> to the RHS function? >> >> Thank you in advance! >> >> Regards, >> >> Mukkund >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Fri Apr 17 09:26:26 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Fri, 17 Apr 2020 08:26:26 -0600 Subject: [petsc-users] How to set an initial guess for TS In-Reply-To: <87d08ob3tu.fsf@jedbrown.org> References: <87y2rcbnca.fsf@jedbrown.org> <87d08ob3tu.fsf@jedbrown.org> Message-ID: Thanks Jed, I will try and let you know, Thanks again! Fande, On Fri, Apr 3, 2020 at 4:29 PM Jed Brown wrote: > Oh, you just want an initial guess for SNES? Does it work to pull out the > SNES and SNESSetComputeInitialGuess? > > Fande Kong writes: > > > No. I am working on a transient loosely coupled multiphysics simulation. > > Assume there are two physics problems: problem A and problem B. During > each > > time step, there is a Picard iteration between problem A and problem B. > > During each Picard step, you solve problem A (or B) with the solution > > (U_{n-1}) of the previous time step as the initial condition. In the > Picard > > solve stage, I know the solution (\bar{U}_{n}) of the current time step > but > > from the previous Picard iteration. Use \bar{U}_{n}) instead of U_{n-1} > as > > the initial guess for SNES will have a better convergence for me. > > > > Thanks, > > > > Fande, > > > > > > On Fri, Apr 3, 2020 at 1:10 PM Jed Brown wrote: > > > >> This sounds like you're talking about a starting procedure for a DAE (or > >> near-singular ODE)? > >> > >> Fande Kong writes: > >> > >> > Hi All, > >> > > >> > TSSetSolution will set an initial condition for the current TSSolve(). > >> What > >> > should I do if I want to set an initial guess for the current solution > >> that > >> > is different from the initial condition? The initial guess is > supposed > >> to > >> > be really close to the current solution, and then will accelerate my > >> solver. > >> > > >> > In other words, TSSetSolution will set "U_{n-1}", and now we call > TSSolve > >> > to figure out "U_{n}". 
If I know something about "U_{n}", and I want > to > >> set > >> > "\bar{U}_{n}" as the initial guess of "U_{n}" when computing "U_{n}". > >> > > >> > > >> > Thanks, > >> > > >> > Fande, > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Fri Apr 17 09:27:44 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Fri, 17 Apr 2020 08:27:44 -0600 Subject: [petsc-users] AIJ vs BAIJ when using ILU factorization In-Reply-To: References: Message-ID: Thanks, Hong, I will try the code with bs=1, and report back to you. Fande, On Tue, Mar 31, 2020 at 9:51 PM Zhang, Hong wrote: > Fande, > Checking aij.result: > Mat Object: () 1 MPI processes > type: seqaij > rows=25816, cols=25816, bs=4 > total: nonzeros=1297664, allocated nonzeros=1297664 > total number of mallocs used during MatSetValues calls=0 > using I-node routines: found 6454 nodes, limit used is 5 > > i.e., it uses bs=4 with I-node. The implementation of MatSolve() is > similar to baij with bs=4. What happens if you try aij with > '-matload_block_size 1 -mat_no_inode true'? > Hong > > ------------------------------ > *From:* petsc-users on behalf of Fande > Kong > *Sent:* Monday, March 30, 2020 12:25 PM > *To:* PETSc users list > *Subject:* [petsc-users] AIJ vs BAIJ when using ILU factorization > > Hi All, > > There is a system of equations arising from the discretization of 3D > incompressible Navier-Stoke equations using a finite element method. 4 > unknowns are placed on each mesh point, and then there is a 4x4 saddle > point block on each mesh vertex. I was thinking to solve the linear > equations using an incomplete LU factorization (that will be eventually > used as a subdomain solver for ASM). > > Right now, I am trying to study the ILU performance using AIJ and BAIJ, > respectively. From my understanding, BAIJ should give me better results > since it inverses the 4x4 blocks exactly, while AIJ does not. However, I > found that both BAIJ and AIJ gave me identical results in terms of the > number of iterations. Was that just coincident? Or in theory, they are > just identical. I understand the runtimes may be different because BAIJ > has a better data locality. > > > Please see the attached files for the results and solver configuration. > > > Thanks, > > Fande, > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Apr 17 09:38:50 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 17 Apr 2020 10:38:50 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: Matt, that kink seems messed up. On Fri, Apr 17, 2020 at 9:18 AM Matthew Knepley wrote: > Mark is right, you could just add in a source term for each cell this way. > > If you really wanted a lower level interface, it would go here I think: > > > https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 > > What would you want the interface to look like? > > Thanks, > > Matt > > On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: > >> You can write your own DMPlexTSComputeRHSFunctionFVM method like this and >> give that to the dm. >> >> PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) >> { >> PetscErrorCode ierr; >> PetscFunctionBegin; >> ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); >> ... 
>> >> PetscFunctionReturn(0); >> } >> >> >> On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII >> wrote: >> >>> Greetings, >>> >>> I had been working on ts/tutorials/ex11.c as part of my master?s thesis. >>> >>> In ex11.c, there is a 2D Shallow Water Model. I want to modify the model >>> by adding a source term to the model. This source term would factor in the >>> effects of bathymetry or bed elevation in addition to water height. >>> >>> I know how I will be able to achieve this if just use >>> TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the >>> computation fluxes at the interfaces (therefore, it uses >>> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) ). >>> >>> Is there a routine that allows me to *add* a source term *for every >>> cell* to the RHS function? >>> >>> Thank you in advance! >>> >>> Regards, >>> >>> Mukkund >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 17 09:41:04 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Apr 2020 10:41:04 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: Tried it again and works for me. Matt On Fri, Apr 17, 2020 at 10:39 AM Mark Adams wrote: > Matt, that kink seems messed up. > > On Fri, Apr 17, 2020 at 9:18 AM Matthew Knepley wrote: > >> Mark is right, you could just add in a source term for each cell this way. >> >> If you really wanted a lower level interface, it would go here I think: >> >> >> https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 >> >> What would you want the interface to look like? >> >> Thanks, >> >> Matt >> >> On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: >> >>> You can write your own DMPlexTSComputeRHSFunctionFVM method like this >>> and give that to the dm. >>> >>> PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) >>> { >>> PetscErrorCode ierr; >>> PetscFunctionBegin; >>> ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); >>> ... >>> >>> PetscFunctionReturn(0); >>> } >>> >>> >>> On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII >>> wrote: >>> >>>> Greetings, >>>> >>>> I had been working on ts/tutorials/ex11.c as part of my master?s >>>> thesis. >>>> >>>> In ex11.c, there is a 2D Shallow Water Model. I want to modify the >>>> model by adding a source term to the model. This source term would factor >>>> in the effects of bathymetry or bed elevation in addition to water height. >>>> >>>> I know how I will be able to achieve this if just use >>>> TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the >>>> computation fluxes at the interfaces (therefore, it uses >>>> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) ). >>>> >>>> Is there a routine that allows me to *add* a source term *for every >>>> cell* to the RHS function? >>>> >>>> Thank you in advance! >>>> >>>> Regards, >>>> >>>> Mukkund >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Fri Apr 17 09:46:41 2020 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 17 Apr 2020 15:46:41 +0100 Subject: [petsc-users] error: too few arguments to function call (PetscOptionsHasName) In-Reply-To: References: <8ECABAA5-9DA4-4DD0-BDAB-040AAFDEF452@icloud.com> Message-ID: Please always use "reply-all" so that your messages go to the list. This is standard mailing list etiquette. It is important to preserve threading for people who find this discussion later and so that we do not waste our time re-answering the same questions that have already been answered in private side-conversations. You'll likely get an answer faster that way too. On Fri, 17 Apr 2020 at 09:20, huabel wrote: > I have checked that manual, I mean why a new version release include old > versions of petsc api, why not update them all to new version? > I understand now. I thought that the code you couldn't compile was something you wrote, however I now see it is actually living in the PETSc src tree. I also note that PLogEvent.c also fails to compile for the same reason. The fact the API change was not propagated throughout these two files in src/benchmarks (PetscMalloc.c and PLogEvent.c) is an oversight. I am surprised that this did not get caught as: (i) API changes are usually applied via smart scripting, (ii) I imagined that the regression testing would have picked this up issue. These files were also broken in v 3.12. Thanks for the bug report. > > > On Apr 17, 2020, at 16:10, Dave May wrote: > > Old versions of petsc had 3 args for this function, latest version expects > 4 (as the compiler error indicates). > > When in doubt as to what these args are, please refer to the extensive man > pages. You can find them all here > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html > > > The page you want for this func is here > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsHasName.html > > > Tip: It is wise to avoid performing a google search of the function name. > It can bring you to the man page for an old version of petsc sometimes and > this can lead to confusion. Best go directly to the URL above (or access > the pages through the petsc web page) to ensure you are looking at the > appropriate man pages > > > Thanks > Dave > > > > On Fri 17. Apr 2020 at 09:43, huabel via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Dear PETSc users, >> >> I?m learn some base for PETSc , compile file src/benchmarks/PetscMalloc.c >> , get next error. 
(Use PETSc 3.13.0) >> >> >pwd >> src/benchmarks >> >> >mpicc PetscMalloc.c >> *PetscMalloc.c:53:49: **error: **too few arguments to function call, >> expected 4, have 3* >> ierr = PetscOptionsHasName(NULL,"-malloc",&flg);CHKERRQ(ierr); >> * ~~~~~~~~~~~~~~~~~~~ ^* >> */usr/local/include/petscoptions.h:18:1: note: *'PetscOptionsHasName' >> declared here >> PETSC_EXTERN PetscErrorCode PetscOptionsHasName(PetscOptions,const >> char[],const char[],PetscBool*); >> *^* >> */usr/local/include/petscsys.h:106:24: note: *expanded from macro >> 'PETSC_EXTERN' >> # define PETSC_EXTERN extern PETSC_VISIBILITY_PUBLIC >> * ^* >> 1 error generated. >> >> Thanks. >> Abel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Apr 17 09:57:47 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 17 Apr 2020 10:57:47 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: Line 1528 of dmplexsnes.c ? On Fri, Apr 17, 2020 at 10:41 AM Matthew Knepley wrote: > Tried it again and works for me. > > Matt > > On Fri, Apr 17, 2020 at 10:39 AM Mark Adams wrote: > >> Matt, that kink seems messed up. >> >> On Fri, Apr 17, 2020 at 9:18 AM Matthew Knepley >> wrote: >> >>> Mark is right, you could just add in a source term for each cell this >>> way. >>> >>> If you really wanted a lower level interface, it would go here I think: >>> >>> >>> https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 >>> >>> What would you want the interface to look like? >>> >>> Thanks, >>> >>> Matt >>> >>> On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: >>> >>>> You can write your own DMPlexTSComputeRHSFunctionFVM method like this >>>> and give that to the dm. >>>> >>>> PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) >>>> { >>>> PetscErrorCode ierr; >>>> PetscFunctionBegin; >>>> ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); >>>> ... >>>> >>>> PetscFunctionReturn(0); >>>> } >>>> >>>> >>>> On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII >>>> wrote: >>>> >>>>> Greetings, >>>>> >>>>> I had been working on ts/tutorials/ex11.c as part of my master?s >>>>> thesis. >>>>> >>>>> In ex11.c, there is a 2D Shallow Water Model. I want to modify the >>>>> model by adding a source term to the model. This source term would factor >>>>> in the effects of bathymetry or bed elevation in addition to water height. >>>>> >>>>> I know how I will be able to achieve this if just use >>>>> TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the >>>>> computation fluxes at the interfaces (therefore, it uses >>>>> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) ). >>>>> >>>>> Is there a routine that allows me to *add* a source term *for every >>>>> cell* to the RHS function? >>>>> >>>>> Thank you in advance! >>>>> >>>>> Regards, >>>>> >>>>> Mukkund >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Fri Apr 17 10:04:51 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 17 Apr 2020 10:04:51 -0500 (CDT) Subject: [petsc-users] Error in configure for --download-=mydir In-Reply-To: References: Message-ID: Some package names have changed [when using git repos for both git and tarballs] - so its best to use the tarballs [or git repos] that are appropriate for the current petsc release. balay at sb /home/balay/petsc (maint=) $ ./configure --with-packages-download-dir=$HOME/tmp --download-fblaslapack --download-mumps --download-scalapack =============================================================================== Configuring PETSc to compile on your system =============================================================================== Download the following packages to /home/balay/tmp fblaslapack ['git://https://bitbucket.org/petsc/pkg-fblaslapack', 'https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz'] mumps ['git://https://bitbucket.org/petsc/pkg-mumps.git', 'https://bitbucket.org/petsc/pkg-mumps/get/v5.2.1-p2.tar.gz'] scalapack ['git://https://bitbucket.org/petsc/pkg-scalapack', 'https://bitbucket.org/petsc/pkg-scalapack/get/v2.1.0-p1.tar.gz'] Then run the script again balay at sb /home/balay/petsc (maint=) $ Satish On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > Dear all, > > For 3.12 and 3.13, I get > > $ export PETSC_DIR=/home/user1/installers/petsc/petsc-3.13.0 > $ export PETSC_ARCH=linux-gnu-opt > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > --prefix=/home/user1/usr/local --with-make-np=10 --with-shared-libraries > --download-fblaslapack=/home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz > --download-mumps=/home/user1/installers/petsc/v5.1.2-p2.tar.gz > --download-scalapack=/home/user1/installers/petsc/scalapack-2.0.2.tgz > --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' > FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 > -march=native -mtune=native' > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > =============================================================================== > Trying to download > file:///home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz for > FBLASLAPACK > =============================================================================== > > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > Error during download/extract/detection of FBLASLAPACK: > Could not locate downloaded package FBLASLAPACK in > /home/user1/installers/petsc/petsc-3.13.0/linux-gnu-opt/externalpackages > ******************************************************************************* > > Up to 3.11, these commands worked fine. > > Am I doing anything wrong? > > Thanks a lot! > From knepley at gmail.com Fri Apr 17 10:07:47 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Apr 2020 11:07:47 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: On Fri, Apr 17, 2020 at 10:58 AM Mark Adams wrote: > Line 1528 of dmplexsnes.c ? > Yes. This is where we add in u_t, so it would make sense to also add in the source term here. 
If it were a function of t, s(t), you would just add vol*s(t). Thanks, Matt > On Fri, Apr 17, 2020 at 10:41 AM Matthew Knepley > wrote: > >> Tried it again and works for me. >> >> Matt >> >> On Fri, Apr 17, 2020 at 10:39 AM Mark Adams wrote: >> >>> Matt, that kink seems messed up. >>> >>> On Fri, Apr 17, 2020 at 9:18 AM Matthew Knepley >>> wrote: >>> >>>> Mark is right, you could just add in a source term for each cell this >>>> way. >>>> >>>> If you really wanted a lower level interface, it would go here I think: >>>> >>>> >>>> https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 >>>> >>>> What would you want the interface to look like? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: >>>> >>>>> You can write your own DMPlexTSComputeRHSFunctionFVM method like this >>>>> and give that to the dm. >>>>> >>>>> PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) >>>>> { >>>>> PetscErrorCode ierr; >>>>> PetscFunctionBegin; >>>>> ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); >>>>> ... >>>>> >>>>> PetscFunctionReturn(0); >>>>> } >>>>> >>>>> >>>>> On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII < >>>>> mukkundsunjii at gmail.com> wrote: >>>>> >>>>>> Greetings, >>>>>> >>>>>> I had been working on ts/tutorials/ex11.c as part of my master?s >>>>>> thesis. >>>>>> >>>>>> In ex11.c, there is a 2D Shallow Water Model. I want to modify the >>>>>> model by adding a source term to the model. This source term would factor >>>>>> in the effects of bathymetry or bed elevation in addition to water height. >>>>>> >>>>>> I know how I will be able to achieve this if just use >>>>>> TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the >>>>>> computation fluxes at the interfaces (therefore, it uses >>>>>> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) ). >>>>>> >>>>>> Is there a routine that allows me to *add* a source term *for every >>>>>> cell* to the RHS function? >>>>>> >>>>>> Thank you in advance! >>>>>> >>>>>> Regards, >>>>>> >>>>>> Mukkund >>>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hu.ds.abel at icloud.com Fri Apr 17 10:09:55 2020 From: hu.ds.abel at icloud.com (huabel) Date: Fri, 17 Apr 2020 23:09:55 +0800 Subject: [petsc-users] error: too few arguments to function call (PetscOptionsHasName) In-Reply-To: References: <8ECABAA5-9DA4-4DD0-BDAB-040AAFDEF452@icloud.com> Message-ID: <33E6408F-0A67-4088-9DE9-2AACFB150872@icloud.com> Thanks. > On Apr 17, 2020, at 22:46, Dave May wrote: > > > Please always use "reply-all" so that your messages go to the list. > This is standard mailing list etiquette. 
It is important to preserve > threading for people who find this discussion later and so that we do > not waste our time re-answering the same questions that have already > been answered in private side-conversations. You'll likely get an > answer faster that way too. > > > On Fri, 17 Apr 2020 at 09:20, huabel > wrote: > I have checked that manual, I mean why a new version release include old versions of petsc api, why not update them all to new version? > > > I understand now. I thought that the code you couldn't compile was something you wrote, however I now see it is actually living in the PETSc src tree. > I also note that PLogEvent.c also fails to compile for the same reason. > > The fact the API change was not propagated throughout these two files in src/benchmarks (PetscMalloc.c and PLogEvent.c) is an oversight. > > I am surprised that this did not get caught as: > (i) API changes are usually applied via smart scripting, > (ii) I imagined that the regression testing would have picked this up issue. > These files were also broken in v 3.12. > > Thanks for the bug report. > > > > > > > >> On Apr 17, 2020, at 16:10, Dave May > wrote: >> >> Old versions of petsc had 3 args for this function, latest version expects 4 (as the compiler error indicates). >> >> When in doubt as to what these args are, please refer to the extensive man pages. You can find them all here >> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html >> >> The page you want for this func is here >> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsHasName.html >> >> Tip: It is wise to avoid performing a google search of the function name. It can bring you to the man page for an old version of petsc sometimes and this can lead to confusion. Best go directly to the URL above (or access the pages through the petsc web page) to ensure you are looking at the appropriate man pages >> >> >> Thanks >> Dave >> >> >> >> On Fri 17. Apr 2020 at 09:43, huabel via petsc-users > wrote: >> Dear PETSc users, >> >> I?m learn some base for PETSc , compile file src/benchmarks/PetscMalloc.c , get next error. (Use PETSc 3.13.0) >> >> >pwd >> src/benchmarks >> >> >mpicc PetscMalloc.c >> PetscMalloc.c:53:49: error: too few arguments to function call, expected 4, have 3 >> ierr = PetscOptionsHasName(NULL,"-malloc",&flg);CHKERRQ(ierr); >> ~~~~~~~~~~~~~~~~~~~ ^ >> /usr/local/include/petscoptions.h:18:1: note: 'PetscOptionsHasName' declared here >> PETSC_EXTERN PetscErrorCode PetscOptionsHasName(PetscOptions,const char[],const char[],PetscBool*); >> ^ >> /usr/local/include/petscsys.h:106:24: note: expanded from macro 'PETSC_EXTERN' >> # define PETSC_EXTERN extern PETSC_VISIBILITY_PUBLIC >> ^ >> 1 error generated. >> >> Thanks. >> Abel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Fri Apr 17 10:09:56 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Fri, 17 Apr 2020 10:09:56 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> Message-ID: On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang wrote: > Randy, > I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also > found the error went away with petsc-3.13. However, I have not figured out > what is the bug and which commit fixed it :). > So at your side, it is better to use the latest petsc. 
> I want to add that even with petsc-3.12.4 the error is random. I was only able to reproduce the error once, so I can not claim petsc-3.13 actually fixed it (or, the bug is really in petsc). > --Junchao Zhang > > > On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang > wrote: > >> Randy, >> Up to now I could not reproduce your error, even with the biggest >> mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 >> While I continue doing test, you can try other options. It looks you >> want to duplicate a vector to subcomms. I don't think you need the two >> lines: >> >> call AOApplicationToPetsc(aoParent,nis,ind1,ierr) >> call AOApplicationToPetsc(aoSub,nis,ind2,ierr) >> >> In addition, you can use simpler and more memory-efficient index sets. >> There is a petsc example for this task, see case 3 in >> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c >> BTW, it is good to use petsc master so we are on the same page. >> --Junchao Zhang >> >> >> On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie >> wrote: >> >>> Hi Junchao, >>> >>> So I was able to create a small test code that duplicates the issue we >>> have been having, and it is attached to this email in a zip file. >>> Included is the test.F90 code, the commands to duplicate crash and to >>> duplicate a successful run, output errors, and our petsc configuration. >>> >>> Our findings to date include: >>> >>> The error is reproducible in a very short time with this script >>> It is related to nproc*nsubs and (although to a less extent) to DM grid >>> size >>> It happens regardless of MPI implementation (mpich, intel mpi 2018, >>> 2019, openmpi) or compiler (gfortran/gcc , intel 2018) >>> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to >>> slightly increase the limit, but still fails on the full machine set. >>> Nothing looks interesting on valgrind >>> >>> Our initial tests were carried out on an Azure cluster, but we also >>> tested on our smaller cluster, and we found the following: >>> >>> Works: >>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test >>> -nsubs 80 -nx 100 -ny 100 -nz 100 >>> >>> Crashes (this works on Azure) >>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test >>> -nsubs 80 -nx 100 -ny 100 -nz 100 >>> >>> So it looks like it may also be related to the physical number of nodes >>> as well. >>> >>> In any case, even with 2560 processes on 192 cores the memory does not >>> go above 3.5 Gbyes so you don?t need a huge cluster to test. >>> >>> Thanks, >>> >>> Randy M. >>> >>> >>> >>> On Apr 14, 2020, at 12:23 PM, Junchao Zhang >>> wrote: >>> >>> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I >>> doubted it was the problem. Even if users configure petsc with 64-bit >>> indices, we use PetscMPIInt in MPI calls. So it is not a problem. >>> Try -vecscatter_type mpi1 to restore to the original VecScatter >>> implementation. If the problem still remains, could you provide a test >>> example for me to debug? >>> >>> --Junchao Zhang >>> >>> >>> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie >>> wrote: >>> >>>> Hi Junchao, >>>> >>>> We have tried your two suggestions but the problem remains. >>>> And the problem seems to be on the MPI_Isend line 117 in >>>> PetscGatherMessageLengths and not MPI_AllReduce. >>>> >>>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking >>>> the problem must be elsewhere and not MPI. 
>>>> >>>> Give that this is a 64 bit indices build of PETSc, is there some >>>> possible incompatibility between PETSc and MPI calls? >>>> >>>> We are open to any other possible suggestions to try as other than >>>> valgrind on thousands of processes we seem to have run out of ideas. >>>> >>>> Thanks, Randy M. >>>> >>>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang >>>> wrote: >>>> >>>> >>>> --Junchao Zhang >>>> >>>> >>>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang >>>> wrote: >>>> >>>>> Randy, >>>>> Someone reported similar problem before. It turned out an Intel MPI >>>>> MPI_Allreduce bug. A workaround is setting the environment variable >>>>> I_MPI_ADJUST_ALLREDUCE=1.arr >>>>> >>>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>>> >>>>> But you mentioned mpich also had the error. So maybe the problem is >>>>> not the same. So let's try the workaround first. If it doesn't work, add >>>>> another petsc option -build_twosided allreduce, which is a workaround for >>>>> Intel MPI_Ibarrier bugs we met. >>>>> Thanks. >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie >>>>> wrote: >>>>> >>>>>> Dear PETSc users, >>>>>> >>>>>> We are trying to understand an issue that has come up in running our >>>>>> code on a large cloud cluster with a large number of processes and subcomms. >>>>>> This is code that we use daily on multiple clusters without problems, >>>>>> and that runs valgrind clean for small test problems. >>>>>> >>>>>> The run generates the following messages, but doesn?t crash, just >>>>>> seems to hang with all processes continuing to show activity: >>>>>> >>>>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in >>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>>>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in >>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>>>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in >>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>>>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in >>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>>>>> >>>>>> >>>>>> Looking at line 117 in PetscGatherMessageLengths we find the >>>>>> offending statement is the MPI_Isend: >>>>>> >>>>>> >>>>>> /* Post the Isends with the message length-info */ >>>>>> for (i=0,j=0; i>>>>> if (ilengths[i]) { >>>>>> ierr = >>>>>> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>>>>> j++; >>>>>> } >>>>>> } >>>>>> >>>>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving >>>>>> the same problem. >>>>>> >>>>>> We suspect there is some limit being set on this cloud cluster on the >>>>>> number of file connections or something, but we don?t know. >>>>>> >>>>>> Anyone have any ideas? We are sort of grasping for straws at this >>>>>> point. >>>>>> >>>>>> Thanks, Randy M. >>>>>> >>>>> >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Fri Apr 17 10:46:57 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 17 Apr 2020 08:46:57 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> Message-ID: Hi Junchao, Thank you for your efforts. We tried petsc-3.13.0 but it made no difference. We think now the issue are with sysctl parameters, and increasing those seemed to have cleared up the problem. 
This also most likely explains how different clusters had different behaviors with our test code. We are now running our code and will report back once we are sure that there are no further issues. Thanks again for your help. Randy M. > On Apr 17, 2020, at 8:09 AM, Junchao Zhang wrote: > > > > > On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang > wrote: > Randy, > I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also found the error went away with petsc-3.13. However, I have not figured out what is the bug and which commit fixed it :). > So at your side, it is better to use the latest petsc. > I want to add that even with petsc-3.12.4 the error is random. I was only able to reproduce the error once, so I can not claim petsc-3.13 actually fixed it (or, the bug is really in petsc). > > --Junchao Zhang > > > On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang > wrote: > Randy, > Up to now I could not reproduce your error, even with the biggest mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 > While I continue doing test, you can try other options. It looks you want to duplicate a vector to subcomms. I don't think you need the two lines: > call AOApplicationToPetsc(aoParent,nis,ind1,ierr) > call AOApplicationToPetsc(aoSub,nis,ind2,ierr) > In addition, you can use simpler and more memory-efficient index sets. There is a petsc example for this task, see case 3 in https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c > BTW, it is good to use petsc master so we are on the same page. > --Junchao Zhang > > > On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie > wrote: > Hi Junchao, > > So I was able to create a small test code that duplicates the issue we have been having, and it is attached to this email in a zip file. > Included is the test.F90 code, the commands to duplicate crash and to duplicate a successful run, output errors, and our petsc configuration. > > Our findings to date include: > > The error is reproducible in a very short time with this script > It is related to nproc*nsubs and (although to a less extent) to DM grid size > It happens regardless of MPI implementation (mpich, intel mpi 2018, 2019, openmpi) or compiler (gfortran/gcc , intel 2018) > No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to slightly increase the limit, but still fails on the full machine set. > Nothing looks interesting on valgrind > > Our initial tests were carried out on an Azure cluster, but we also tested on our smaller cluster, and we found the following: > > Works: > $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 > > Crashes (this works on Azure) > $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 > > So it looks like it may also be related to the physical number of nodes as well. > > In any case, even with 2560 processes on 192 cores the memory does not go above 3.5 Gbyes so you don?t need a huge cluster to test. > > Thanks, > > Randy M. > > > >> On Apr 14, 2020, at 12:23 PM, Junchao Zhang > wrote: >> >> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I doubted it was the problem. Even if users configure petsc with 64-bit indices, we use PetscMPIInt in MPI calls. So it is not a problem. >> Try -vecscatter_type mpi1 to restore to the original VecScatter implementation. If the problem still remains, could you provide a test example for me to debug? 
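To illustrate the "simpler and more memory-efficient index sets" Junchao mentions above: a stride IS describes a contiguous (or regularly strided) range implicitly, so it avoids storing the n integers that an explicit index list needs. A minimal sketch, with comm, n, indices and first as placeholders, and not necessarily what case 3 of ex9.c does:

  /* explicit list: stores n PetscInts */
  ierr = ISCreateGeneral(comm, n, indices, PETSC_COPY_VALUES, &is);CHKERRQ(ierr);

  /* same contiguous range first, first+1, ..., first+n-1, described without an index array */
  ierr = ISCreateStride(comm, n, first, 1, &is);CHKERRQ(ierr);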
>> >> --Junchao Zhang >> >> >> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie > wrote: >> Hi Junchao, >> >> We have tried your two suggestions but the problem remains. >> And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. >> >> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the problem must be elsewhere and not MPI. >> >> Give that this is a 64 bit indices build of PETSc, is there some possible incompatibility between PETSc and MPI calls? >> >> We are open to any other possible suggestions to try as other than valgrind on thousands of processes we seem to have run out of ideas. >> >> Thanks, Randy M. >> >>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang > wrote: >>> >>> >>> --Junchao Zhang >>> >>> >>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang > wrote: >>> Randy, >>> Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr >>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>> But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. >>> Thanks. >>> --Junchao Zhang >>> >>> >>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: >>> Dear PETSc users, >>> >>> We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. >>> This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. >>> >>> The run generates the following messages, but doesn?t crash, just seems to hang with all processes continuing to show activity: >>> >>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>> >>> >>> Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend: >>> >>> >>> /* Post the Isends with the message length-info */ >>> for (i=0,j=0; i>> if (ilengths[i]) { >>> ierr = MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>> j++; >>> } >>> } >>> >>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the same problem. >>> >>> We suspect there is some limit being set on this cloud cluster on the number of file connections or something, but we don?t know. >>> >>> Anyone have any ideas? We are sort of grasping for straws at this point. >>> >>> Thanks, Randy M. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Apr 17 11:52:24 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 17 Apr 2020 12:52:24 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: Just to be clear, you seem to be suggesting adding an interface for this and/or code to clone? 
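For concreteness, a minimal sketch of how the wrapper suggested earlier in this thread could add a per-cell source after the flux computation. Everything here is illustrative: MySource() is a made-up placeholder, only the first field component is touched, and whether a cell-volume factor is needed (Matt's vol*s(t)) depends on the scaling DMPlexTSComputeRHSFunctionFVM applies to F, so it should be checked against the flux terms before use. The headers ex11.c already includes (petscts.h, petscdmplex.h) are assumed.

static PetscScalar MySource(PetscReal t) { return 0.0; } /* hypothetical source density, function of time only */

PetscErrorCode RHSFunctionLocalWithSource(DM dm, PetscReal time, Vec locX, Vec F, void *user)
{
  PetscScalar    *f;
  PetscInt        cStart, cEnd, c;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  /* fluxes from the Riemann solver, exactly as ex11.c does now */
  ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr);
  /* then add a source to every cell; F is the global output vector here */
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {
    PetscScalar *fc;

    ierr = DMPlexPointGlobalRef(dm, c, f, &fc);CHKERRQ(ierr);
    if (!fc) continue;        /* NULL for cells this rank does not own */
    fc[0] += MySource(time);  /* a real model would likely also skip FV boundary ghost cells ("ghost" label) */
  }
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

It would then be registered with DMTSSetRHSFunctionLocal(dm, RHSFunctionLocalWithSource, user) in place of DMPlexTSComputeRHSFunctionFVM, as in Mark's skeleton above.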
On Fri, Apr 17, 2020 at 11:08 AM Matthew Knepley wrote: > On Fri, Apr 17, 2020 at 10:58 AM Mark Adams wrote: > >> Line 1528 of dmplexsnes.c ? >> > > Yes. This is where we add in u_t, so it would make sense to also add in > the source term here. If it were a function of t, s(t), you would > just add vol*s(t). > > Thanks, > > Matt > > >> On Fri, Apr 17, 2020 at 10:41 AM Matthew Knepley >> wrote: >> >>> Tried it again and works for me. >>> >>> Matt >>> >>> On Fri, Apr 17, 2020 at 10:39 AM Mark Adams wrote: >>> >>>> Matt, that kink seems messed up. >>>> >>>> On Fri, Apr 17, 2020 at 9:18 AM Matthew Knepley >>>> wrote: >>>> >>>>> Mark is right, you could just add in a source term for each cell this >>>>> way. >>>>> >>>>> If you really wanted a lower level interface, it would go here I think: >>>>> >>>>> >>>>> https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 >>>>> >>>>> What would you want the interface to look like? >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: >>>>> >>>>>> You can write your own DMPlexTSComputeRHSFunctionFVM method like this >>>>>> and give that to the dm. >>>>>> >>>>>> PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) >>>>>> { >>>>>> PetscErrorCode ierr; >>>>>> PetscFunctionBegin; >>>>>> ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); >>>>>> ... >>>>>> >>>>>> PetscFunctionReturn(0); >>>>>> } >>>>>> >>>>>> >>>>>> On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII < >>>>>> mukkundsunjii at gmail.com> wrote: >>>>>> >>>>>>> Greetings, >>>>>>> >>>>>>> I had been working on ts/tutorials/ex11.c as part of my master?s >>>>>>> thesis. >>>>>>> >>>>>>> In ex11.c, there is a 2D Shallow Water Model. I want to modify the >>>>>>> model by adding a source term to the model. This source term would factor >>>>>>> in the effects of bathymetry or bed elevation in addition to water height. >>>>>>> >>>>>>> I know how I will be able to achieve this if just use >>>>>>> TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the >>>>>>> computation fluxes at the interfaces (therefore, it uses >>>>>>> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) ). >>>>>>> >>>>>>> Is there a routine that allows me to *add* a source term *for every >>>>>>> cell* to the RHS function? >>>>>>> >>>>>>> Thank you in advance! >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> Mukkund >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Apr 17 12:02:27 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Apr 2020 13:02:27 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: On Fri, Apr 17, 2020 at 12:52 PM Mark Adams wrote: > Just to be clear, you seem to be suggesting adding an interface for this > and/or code to clone? > Adding an interface. Thanks, Matt > On Fri, Apr 17, 2020 at 11:08 AM Matthew Knepley > wrote: > >> On Fri, Apr 17, 2020 at 10:58 AM Mark Adams wrote: >> >>> Line 1528 of dmplexsnes.c ? >>> >> >> Yes. This is where we add in u_t, so it would make sense to also add in >> the source term here. If it were a function of t, s(t), you would >> just add vol*s(t). >> >> Thanks, >> >> Matt >> >> >>> On Fri, Apr 17, 2020 at 10:41 AM Matthew Knepley >>> wrote: >>> >>>> Tried it again and works for me. >>>> >>>> Matt >>>> >>>> On Fri, Apr 17, 2020 at 10:39 AM Mark Adams wrote: >>>> >>>>> Matt, that kink seems messed up. >>>>> >>>>> On Fri, Apr 17, 2020 at 9:18 AM Matthew Knepley >>>>> wrote: >>>>> >>>>>> Mark is right, you could just add in a source term for each cell this >>>>>> way. >>>>>> >>>>>> If you really wanted a lower level interface, it would go here I >>>>>> think: >>>>>> >>>>>> >>>>>> https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 >>>>>> >>>>>> What would you want the interface to look like? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: >>>>>> >>>>>>> You can write your own DMPlexTSComputeRHSFunctionFVM method like >>>>>>> this and give that to the dm. >>>>>>> >>>>>>> PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) >>>>>>> { >>>>>>> PetscErrorCode ierr; >>>>>>> PetscFunctionBegin; >>>>>>> ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); >>>>>>> ... >>>>>>> >>>>>>> PetscFunctionReturn(0); >>>>>>> } >>>>>>> >>>>>>> >>>>>>> On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII < >>>>>>> mukkundsunjii at gmail.com> wrote: >>>>>>> >>>>>>>> Greetings, >>>>>>>> >>>>>>>> I had been working on ts/tutorials/ex11.c as part of my master?s >>>>>>>> thesis. >>>>>>>> >>>>>>>> In ex11.c, there is a 2D Shallow Water Model. I want to modify the >>>>>>>> model by adding a source term to the model. This source term would factor >>>>>>>> in the effects of bathymetry or bed elevation in addition to water height. >>>>>>>> >>>>>>>> I know how I will be able to achieve this if just use >>>>>>>> TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the >>>>>>>> computation fluxes at the interfaces (therefore, it uses >>>>>>>> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) >>>>>>>> ). >>>>>>>> >>>>>>>> Is there a routine that allows me to *add* a source term *for >>>>>>>> every cell* to the RHS function? >>>>>>>> >>>>>>>> Thank you in advance! >>>>>>>> >>>>>>>> Regards, >>>>>>>> >>>>>>>> Mukkund >>>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. 
>>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Fri Apr 17 15:17:29 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Fri, 17 Apr 2020 17:17:29 -0300 Subject: [petsc-users] Error in configure for --download-=mydir In-Reply-To: References: Message-ID: The point is that I cannot go through a proxy to simply use --download- --download- ... So I would: 1. Run configure with --download- ... 2. Wait until configure complains about the missing package and get the URL 3. wget the package 4. Add the local file name to --download-=... Repeat one package after the other until I have all the packages locally. I guess this should work. But then I get the error reported, starting with 3.12. I am attaching configure.log Thanks! Santiago On Fri, Apr 17, 2020 at 12:04 PM Satish Balay wrote: > Some package names have changed [when using git repos for both git and > tarballs] - so its best to use the tarballs [or git repos] that are > appropriate for the current petsc release. > > balay at sb /home/balay/petsc (maint=) > $ ./configure --with-packages-download-dir=$HOME/tmp > --download-fblaslapack --download-mumps --download-scalapack > > =============================================================================== > Configuring PETSc to compile on your system > > > =============================================================================== > Download the following packages to /home/balay/tmp > > fblaslapack ['git://https://bitbucket.org/petsc/pkg-fblaslapack', ' > https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz'] > mumps ['git://https://bitbucket.org/petsc/pkg-mumps.git', ' > https://bitbucket.org/petsc/pkg-mumps/get/v5.2.1-p2.tar.gz'] > scalapack ['git://https://bitbucket.org/petsc/pkg-scalapack', ' > https://bitbucket.org/petsc/pkg-scalapack/get/v2.1.0-p1.tar.gz'] > > Then run the script again > > balay at sb /home/balay/petsc (maint=) > $ > > Satish > On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > > > Dear all, > > > > For 3.12 and 3.13, I get > > > > $ export PETSC_DIR=/home/user1/installers/petsc/petsc-3.13.0 > > $ export PETSC_ARCH=linux-gnu-opt > > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > > --prefix=/home/user1/usr/local --with-make-np=10 --with-shared-libraries > > > --download-fblaslapack=/home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz > > --download-mumps=/home/user1/installers/petsc/v5.1.2-p2.tar.gz > > --download-scalapack=/home/user1/installers/petsc/scalapack-2.0.2.tgz > > --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' > > FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 > > -march=native -mtune=native' > > > =============================================================================== > > Configuring PETSc to compile on your system > > > =============================================================================== > > > 
=============================================================================== > > Trying to download > > file:///home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz for > > FBLASLAPACK > > > =============================================================================== > > > > > > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > > ------------------------------------------------------------------------------- > > Error during download/extract/detection of FBLASLAPACK: > > Could not locate downloaded package FBLASLAPACK in > > /home/user1/installers/petsc/petsc-3.13.0/linux-gnu-opt/externalpackages > > > ******************************************************************************* > > > > Up to 3.11, these commands worked fine. > > > > Am I doing anything wrong? > > > > Thanks a lot! > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 790710 bytes Desc: not available URL: From mfadams at lbl.gov Fri Apr 17 15:27:23 2020 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 17 Apr 2020 16:27:23 -0400 Subject: [petsc-users] Modification to ex11.c In-Reply-To: References: <628AC8BB-BBEA-48EB-9D8C-2D418444E17A@gmail.com> Message-ID: I appreciate the ambition and this is a pretty basic thing, but I would suggest not doing anything until more people ask about it. And with this new example there will be code for people to clone. THe interface is not clear to me anyway. I could imagine point functions would work for some and others have the data in some for and its more natural to stuff the data in yourself. Just my 2c, Mark On Fri, Apr 17, 2020 at 1:04 PM Matthew Knepley wrote: > On Fri, Apr 17, 2020 at 12:52 PM Mark Adams wrote: > >> Just to be clear, you seem to be suggesting adding an interface for this >> and/or code to clone? >> > > Adding an interface. > > Thanks, > > Matt > > >> On Fri, Apr 17, 2020 at 11:08 AM Matthew Knepley >> wrote: >> >>> On Fri, Apr 17, 2020 at 10:58 AM Mark Adams wrote: >>> >>>> Line 1528 of dmplexsnes.c ? >>>> >>> >>> Yes. This is where we add in u_t, so it would make sense to also add in >>> the source term here. If it were a function of t, s(t), you would >>> just add vol*s(t). >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> On Fri, Apr 17, 2020 at 10:41 AM Matthew Knepley >>>> wrote: >>>> >>>>> Tried it again and works for me. >>>>> >>>>> Matt >>>>> >>>>> On Fri, Apr 17, 2020 at 10:39 AM Mark Adams wrote: >>>>> >>>>>> Matt, that kink seems messed up. >>>>>> >>>>>> On Fri, Apr 17, 2020 at 9:18 AM Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> Mark is right, you could just add in a source term for each cell >>>>>>> this way. >>>>>>> >>>>>>> If you really wanted a lower level interface, it would go here I >>>>>>> think: >>>>>>> >>>>>>> >>>>>>> https://gitlab.com/petsc/petsc/-/blob/master/src/snes/utils/dmplexsnes.c#L1528 >>>>>>> >>>>>>> What would you want the interface to look like? >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote: >>>>>>> >>>>>>>> You can write your own DMPlexTSComputeRHSFunctionFVM method like >>>>>>>> this and give that to the dm. 
>>>>>>>> >>>>>>>> PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user) >>>>>>>> { >>>>>>>> PetscErrorCode ierr; >>>>>>>> PetscFunctionBegin; >>>>>>>> ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr); >>>>>>>> ... >>>>>>>> >>>>>>>> PetscFunctionReturn(0); >>>>>>>> } >>>>>>>> >>>>>>>> >>>>>>>> On Fri, Apr 17, 2020 at 5:28 AM MUKKUND SUNJII < >>>>>>>> mukkundsunjii at gmail.com> wrote: >>>>>>>> >>>>>>>>> Greetings, >>>>>>>>> >>>>>>>>> I had been working on ts/tutorials/ex11.c as part of my master?s >>>>>>>>> thesis. >>>>>>>>> >>>>>>>>> In ex11.c, there is a 2D Shallow Water Model. I want to modify the >>>>>>>>> model by adding a source term to the model. This source term would factor >>>>>>>>> in the effects of bathymetry or bed elevation in addition to water height. >>>>>>>>> >>>>>>>>> I know how I will be able to achieve this if just use >>>>>>>>> TSSetRHSFunction(). However ex11.c makes use of the Riemann Solver for the >>>>>>>>> computation fluxes at the interfaces (therefore, it uses >>>>>>>>> DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, user) >>>>>>>>> ). >>>>>>>>> >>>>>>>>> Is there a routine that allows me to *add* a source term *for >>>>>>>>> every cell* to the RHS function? >>>>>>>>> >>>>>>>>> Thank you in advance! >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> >>>>>>>>> Mukkund >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Apr 17 15:28:34 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 17 Apr 2020 15:28:34 -0500 (CDT) Subject: [petsc-users] Error in configure for --download-=mydir In-Reply-To: References: Message-ID: You might want to check my previous reply more closely. On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > The point is that I cannot go through a proxy to simply use > --download- --download- ... > > So I would: > 1. Run configure with --download- ... > 2. Wait until configure complains about the missing package and get the URL > 3. wget the package Configure complaint [in step 2 above] would have listed either or both: git://https://bitbucket.org/petsc/pkg-fblaslapack https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz However - you are attempting to use: --download-fblaslapack=/home/santiago/Documents/installers/petsc/fblaslapack-3.4.2.tar.gz This is not the same tarball as what configure has instructed you to use. Satish > 4. 
Add the local file name to --download-=... > Repeat one package after the other until I have all the packages locally. > > I guess this should work. But then I get the error reported, starting with > 3.12. > > I am attaching configure.log > > Thanks! > Santiago > > > > On Fri, Apr 17, 2020 at 12:04 PM Satish Balay wrote: > > > Some package names have changed [when using git repos for both git and > > tarballs] - so its best to use the tarballs [or git repos] that are > > appropriate for the current petsc release. > > > > balay at sb /home/balay/petsc (maint=) > > $ ./configure --with-packages-download-dir=$HOME/tmp > > --download-fblaslapack --download-mumps --download-scalapack > > > > =============================================================================== > > Configuring PETSc to compile on your system > > > > > > =============================================================================== > > Download the following packages to /home/balay/tmp > > > > fblaslapack ['git://https://bitbucket.org/petsc/pkg-fblaslapack', ' > > https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz'] > > mumps ['git://https://bitbucket.org/petsc/pkg-mumps.git', ' > > https://bitbucket.org/petsc/pkg-mumps/get/v5.2.1-p2.tar.gz'] > > scalapack ['git://https://bitbucket.org/petsc/pkg-scalapack', ' > > https://bitbucket.org/petsc/pkg-scalapack/get/v2.1.0-p1.tar.gz'] > > > > Then run the script again > > > > balay at sb /home/balay/petsc (maint=) > > $ > > > > Satish > > On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > > > > > Dear all, > > > > > > For 3.12 and 3.13, I get > > > > > > $ export PETSC_DIR=/home/user1/installers/petsc/petsc-3.13.0 > > > $ export PETSC_ARCH=linux-gnu-opt > > > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > > > --prefix=/home/user1/usr/local --with-make-np=10 --with-shared-libraries > > > > > --download-fblaslapack=/home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz > > > --download-mumps=/home/user1/installers/petsc/v5.1.2-p2.tar.gz > > > --download-scalapack=/home/user1/installers/petsc/scalapack-2.0.2.tgz > > > --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' > > > FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 > > > -march=native -mtune=native' > > > > > =============================================================================== > > > Configuring PETSc to compile on your system > > > > > =============================================================================== > > > > > =============================================================================== > > > Trying to download > > > file:///home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz for > > > FBLASLAPACK > > > > > =============================================================================== > > > > > > > > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > details): > > > > > ------------------------------------------------------------------------------- > > > Error during download/extract/detection of FBLASLAPACK: > > > Could not locate downloaded package FBLASLAPACK in > > > /home/user1/installers/petsc/petsc-3.13.0/linux-gnu-opt/externalpackages > > > > > ******************************************************************************* > > > > > > Up to 3.11, these commands worked fine. > > > > > > Am I doing anything wrong? > > > > > > Thanks a lot! 
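Putting Satish's replies together, the proxy-free workflow is roughly the sketch below; the option and the fblaslapack URL are the ones quoted in this thread, and configure prints the exact list for whatever packages and PETSc version are actually being built:

  # 1. ask configure what it needs and where to put it (no downloads are attempted)
  ./configure --with-packages-download-dir=$HOME/tmp --download-fblaslapack --download-mumps --download-scalapack ...
  # 2. on a machine with web access, fetch exactly the tarballs configure listed, e.g.
  wget -P $HOME/tmp https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz
  #    (likewise for mumps and scalapack)
  # 3. run the same configure command again; it now picks the tarballs up from $HOME/tmp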
> > > > > > > > From sserebrinsky at gmail.com Fri Apr 17 17:24:42 2020 From: sserebrinsky at gmail.com (Santiago Serebrinsky) Date: Fri, 17 Apr 2020 19:24:42 -0300 Subject: [petsc-users] Error in configure for --download-=mydir In-Reply-To: References: Message-ID: Satish, Thank you, it worked. Regards, Santiago On Fri, Apr 17, 2020 at 5:28 PM Satish Balay wrote: > You might want to check my previous reply more closely. > > On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > > > The point is that I cannot go through a proxy to simply use > > --download- --download- ... > > > > So I would: > > 1. Run configure with --download- ... > > 2. Wait until configure complains about the missing package and get the > URL > > 3. wget the package > > Configure complaint [in step 2 above] would have listed either or both: > > git://https://bitbucket.org/petsc/pkg-fblaslapack > https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz > > However - you are attempting to use: > --download-fblaslapack=/home/santiago/Documents/installers/petsc/fblaslapack-3.4.2.tar.gz > > This is not the same tarball as what configure has instructed you to use. > > Satish > > > 4. Add the local file name to --download-=... > > Repeat one package after the other until I have all the packages locally. > > > > I guess this should work. But then I get the error reported, starting > with > > 3.12. > > > > I am attaching configure.log > > > > Thanks! > > Santiago > > > > > > > > On Fri, Apr 17, 2020 at 12:04 PM Satish Balay wrote: > > > > > Some package names have changed [when using git repos for both git and > > > tarballs] - so its best to use the tarballs [or git repos] that are > > > appropriate for the current petsc release. > > > > > > balay at sb /home/balay/petsc (maint=) > > > $ ./configure --with-packages-download-dir=$HOME/tmp > > > --download-fblaslapack --download-mumps --download-scalapack > > > > > > > =============================================================================== > > > Configuring PETSc to compile on your system > > > > > > > > > > =============================================================================== > > > Download the following packages to /home/balay/tmp > > > > > > fblaslapack ['git://https://bitbucket.org/petsc/pkg-fblaslapack', ' > > > https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz'] > > > mumps ['git://https://bitbucket.org/petsc/pkg-mumps.git', ' > > > https://bitbucket.org/petsc/pkg-mumps/get/v5.2.1-p2.tar.gz'] > > > scalapack ['git://https://bitbucket.org/petsc/pkg-scalapack', ' > > > https://bitbucket.org/petsc/pkg-scalapack/get/v2.1.0-p1.tar.gz'] > > > > > > Then run the script again > > > > > > balay at sb /home/balay/petsc (maint=) > > > $ > > > > > > Satish > > > On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > > > > > > > Dear all, > > > > > > > > For 3.12 and 3.13, I get > > > > > > > > $ export PETSC_DIR=/home/user1/installers/petsc/petsc-3.13.0 > > > > $ export PETSC_ARCH=linux-gnu-opt > > > > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > > > > --prefix=/home/user1/usr/local --with-make-np=10 > --with-shared-libraries > > > > > > > > --download-fblaslapack=/home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz > > > > --download-mumps=/home/user1/installers/petsc/v5.1.2-p2.tar.gz > > > > --download-scalapack=/home/user1/installers/petsc/scalapack-2.0.2.tgz > > > > --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' > > > > FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 > > > > 
-march=native -mtune=native' > > > > > > > > =============================================================================== > > > > Configuring PETSc to compile on your system > > > > > > > > =============================================================================== > > > > > > > > =============================================================================== > > > > Trying to > download > > > > file:///home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz for > > > > FBLASLAPACK > > > > > > > > =============================================================================== > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > configure.log for > > > > details): > > > > > > > > ------------------------------------------------------------------------------- > > > > Error during download/extract/detection of FBLASLAPACK: > > > > Could not locate downloaded package FBLASLAPACK in > > > > > /home/user1/installers/petsc/petsc-3.13.0/linux-gnu-opt/externalpackages > > > > > > > > ******************************************************************************* > > > > > > > > Up to 3.11, these commands worked fine. > > > > > > > > Am I doing anything wrong? > > > > > > > > Thanks a lot! > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Apr 17 17:38:01 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 17 Apr 2020 17:38:01 -0500 (CDT) Subject: [petsc-users] Error in configure for --download-=mydir In-Reply-To: References: Message-ID: Great! Thanks for the update. Satish On Fri, 17 Apr 2020, Santiago Serebrinsky wrote: > Satish, > Thank you, it worked. > Regards, > Santiago > > > On Fri, Apr 17, 2020 at 5:28 PM Satish Balay wrote: > > > You might want to check my previous reply more closely. > > > > On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > > > > > The point is that I cannot go through a proxy to simply use > > > --download- --download- ... > > > > > > So I would: > > > 1. Run configure with --download- ... > > > 2. Wait until configure complains about the missing package and get the > > URL > > > 3. wget the package > > > > Configure complaint [in step 2 above] would have listed either or both: > > > > git://https://bitbucket.org/petsc/pkg-fblaslapack > > https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz > > > > However - you are attempting to use: > > --download-fblaslapack=/home/santiago/Documents/installers/petsc/fblaslapack-3.4.2.tar.gz > > > > This is not the same tarball as what configure has instructed you to use. > > > > Satish > > > > > 4. Add the local file name to --download-=... > > > Repeat one package after the other until I have all the packages locally. > > > > > > I guess this should work. But then I get the error reported, starting > > with > > > 3.12. > > > > > > I am attaching configure.log > > > > > > Thanks! > > > Santiago > > > > > > > > > > > > On Fri, Apr 17, 2020 at 12:04 PM Satish Balay wrote: > > > > > > > Some package names have changed [when using git repos for both git and > > > > tarballs] - so its best to use the tarballs [or git repos] that are > > > > appropriate for the current petsc release. 
> > > > > > > > balay at sb /home/balay/petsc (maint=) > > > > $ ./configure --with-packages-download-dir=$HOME/tmp > > > > --download-fblaslapack --download-mumps --download-scalapack > > > > > > > > > > =============================================================================== > > > > Configuring PETSc to compile on your system > > > > > > > > > > > > > > =============================================================================== > > > > Download the following packages to /home/balay/tmp > > > > > > > > fblaslapack ['git://https://bitbucket.org/petsc/pkg-fblaslapack', ' > > > > https://bitbucket.org/petsc/pkg-fblaslapack/get/v3.4.2-p3.tar.gz'] > > > > mumps ['git://https://bitbucket.org/petsc/pkg-mumps.git', ' > > > > https://bitbucket.org/petsc/pkg-mumps/get/v5.2.1-p2.tar.gz'] > > > > scalapack ['git://https://bitbucket.org/petsc/pkg-scalapack', ' > > > > https://bitbucket.org/petsc/pkg-scalapack/get/v2.1.0-p1.tar.gz'] > > > > > > > > Then run the script again > > > > > > > > balay at sb /home/balay/petsc (maint=) > > > > $ > > > > > > > > Satish > > > > On Fri, 17 Apr 2020, san.temporal at gmail.com wrote: > > > > > > > > > Dear all, > > > > > > > > > > For 3.12 and 3.13, I get > > > > > > > > > > $ export PETSC_DIR=/home/user1/installers/petsc/petsc-3.13.0 > > > > > $ export PETSC_ARCH=linux-gnu-opt > > > > > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > > > > > --prefix=/home/user1/usr/local --with-make-np=10 > > --with-shared-libraries > > > > > > > > > > > --download-fblaslapack=/home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz > > > > > --download-mumps=/home/user1/installers/petsc/v5.1.2-p2.tar.gz > > > > > --download-scalapack=/home/user1/installers/petsc/scalapack-2.0.2.tgz > > > > > --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' > > > > > FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 > > > > > -march=native -mtune=native' > > > > > > > > > > > =============================================================================== > > > > > Configuring PETSc to compile on your system > > > > > > > > > > > =============================================================================== > > > > > > > > > > > =============================================================================== > > > > > Trying to > > download > > > > > file:///home/user1/installers/petsc/fblaslapack-3.4.2.tar.gz for > > > > > FBLASLAPACK > > > > > > > > > > > =============================================================================== > > > > > > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > configure.log for > > > > > details): > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > Error during download/extract/detection of FBLASLAPACK: > > > > > Could not locate downloaded package FBLASLAPACK in > > > > > > > /home/user1/installers/petsc/petsc-3.13.0/linux-gnu-opt/externalpackages > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > Up to 3.11, these commands worked fine. > > > > > > > > > > Am I doing anything wrong? > > > > > > > > > > Thanks a lot! 
> > > > > > > > > > > > > > > > > > > > > From mfadams at lbl.gov Sat Apr 18 09:24:22 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 18 Apr 2020 10:24:22 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: Back to SuperLU + GPUs (adding Sherry) I get this error (appended) running 'check', as I said before. It looks like ex19 is *failing* with CUDA but it is not clear it has anything to do with SuperLU. I can not find these diagnostics that got printed after the error in PETSc or SuperLU. So this is a problem, but moving on to my code (plex/ex11 in mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. I use superlu and GPUs, but they do not seem to be used in SuperLU: ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flop --- Global --- --- Stage ---- Total GPU - CpuToGpu - - GpuToCpu - GPU Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size Count Size %F --------------------------------------------------------------------------------------------------------------------------------------------------------------- .... MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 0.00e+00 0 0.00e+00 0* No CUDA version. The times are the same and no GPU communication above. So SuperLU does not seem to be using GPUs. ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flop --- Global --- --- Stage ---- Total Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ .... MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 There are some differences: ex19 use DMDA and I use DMPlex, 'check' is run in my home directory, where files can not be written, and I run my code in the project areas. The timings are different without superlu so I think superlu is being used. THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type sell) jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 -petscspace_degree 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type superlu -mat_superlu_equil -dm_mat_type sell So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned on in SuperLU. Thoughts? 
Thanks, Mark 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp check Running check examples to verify correct installation Using PETSC_DIR=/ccs/home/adams/petsc and PETSC_ARCH=arch-summit-opt-gnu-cuda-omp C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes 2c2,39 < Number of SNES iterations = 2 --- *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion `cacheNode != __null' failed.*> [h50n09:102287] *** Process received signal *** > CUDA version: v 10010 > CUDA Devices: > > 0 : Tesla V100-SXM2-16GB 7 0 > Global memory: 16128 mb > Shared memory: 48 kb > Constant memory: 64 kb > Block registers: 65536 > > [h50n09:102287] Signal: Aborted (6) > [h50n09:102287] Associated errno: Unknown error 1072693248 (1072693248) > [h50n09:102287] Signal code: User function (kill, sigsend, abort, etc.) (0) > [h50n09:102287] [ 0] [0x2000000504d8] > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] > [h50n09:102287] [ 3] /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] > [h50n09:102287] [ 4] /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > [h50n09:102287] [ 5] /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > [h50n09:102287] [ 6] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] > [h50n09:102287] [ 7] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] > [h50n09:102287] [ 8] /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] > [h50n09:102287] [10] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] > [h50n09:102287] [11] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] > [h50n09:102287] [12] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] > [h50n09:102287] [13] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] > [h50n09:102287] [14] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] > [h50n09:102287] [15] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] > [h50n09:102287] [16] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] > [h50n09:102287] [17] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] > [h50n09:102287] [18] /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] > [h50n09:102287] [19] ./ex19[0x10001a6c] > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] > [h50n09:102287] [21] /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] > [h50n09:102287] *** 
End of error message *** > ERROR: One or more process (first noticed rank 0) terminated with signal 6 /ccs/home/adams/petsc/src/snes/tutorials Possible problem with ex19 running with superlu_dist, diffs above #!/usr/bin/env python if __name__ == '__main__': import sys import os sys.path.insert(0, os.path.abspath('config')) import configure configure_options = [ '--with-fc=0', '--COPTFLAGS=-g -O2 -fPIC -fopenmp', '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', '--CUDAOPTFLAGS=-O2 -g', '--with-ssl=0', '--with-batch=0', '--with-cxx=mpicxx', '--with-mpiexec=jsrun -g1', '--with-cuda=1', '--with-cudac=nvcc', '--download-p4est=1', '--download-zlib', '--download-hdf5=1', '--download-metis', '--download-superlu', '--download-superlu_dist', '--with-make-np=16', # '--with-hwloc=0', '--download-parmetis', # '--download-hypre', '--download-triangle', # '--download-amgx', # '--download-fblaslapack', '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack', '--with-cc=mpicc', # '--with-fc=mpif90', '--with-shared-libraries=1', # '--known-mpi-shared-libraries=1', '--with-x=0', '--with-64-bit-indices=0', '--with-debugging=0', 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', '--with-openmp=1', '--with-threadsaftey=1', '--with-log=1' ] configure.petsc_configure(configure_options) On Wed, Apr 15, 2020 at 9:58 PM Satish Balay wrote: > The crash is inside Superlu_DIST - so don't know what to suggest. > > Might have to debug this via debugger and check with Sherry. > > Satish > > On Wed, 15 Apr 2020, Mark Adams wrote: > > > Ah, OK 'check' will test SuperLU. Semi worked: > > > > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make > > PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > check > > Running check examples to verify correct installation > > Using PETSC_DIR=/ccs/home/adams/petsc and > > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI > processes > > 2c2,38 > > < Number of SNES iterations = 2 > > --- > > > CUDA version: v 10010 > > > CUDA Devices: > > > > > > 0 : Tesla V100-SXM2-16GB 7 0 > > > Global memory: 16128 mb > > > Shared memory: 48 kb > > > Constant memory: 64 kb > > > Block registers: 65536 > > > > > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion > > `cacheNode != __null' failed. 
> > > [h16n07:78357] *** Process received signal *** > > > [h16n07:78357] Signal: Aborted (6) > > > [h16n07:78357] Signal code: (1704218624) > > > [h16n07:78357] [ 0] [0x2000000504d8] > > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] > > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] > > > [h16n07:78357] [ 3] > /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] > > > [h16n07:78357] [ 4] > > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > > [h16n07:78357] [ 5] > > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > > [h16n07:78357] [ 6] > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] > > > [h16n07:78357] [ 7] > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] > > > [h16n07:78357] [ 8] > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] > > > [h16n07:78357] [ 9] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] > > > [h16n07:78357] [10] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] > > > [h16n07:78357] [11] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] > > > [h16n07:78357] [12] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] > > > [h16n07:78357] [13] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] > > > [h16n07:78357] [14] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] > > > [h16n07:78357] [15] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] > > > [h16n07:78357] [16] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] > > > [h16n07:78357] [17] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] > > > [h16n07:78357] [18] > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] > > > [h16n07:78357] [19] ./ex19[0x10002ac8] > > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] > > > [h16n07:78357] [21] > > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] > > > [h16n07:78357] *** End of error message *** > > > ERROR: One or more process (first noticed rank 0) terminated with > signal > > 6 > > /ccs/home/adams/petsc/src/snes/tutorials > > Possible problem with ex19 running with superlu_dist, diffs above > > ========================================= > > > > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay wrote: > > > > > Please send configure.log > > > > > > This is what I get on my linux build: > > > > > > [balay at p1 petsc]$ ./configure > > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 > > > --with-openmp=1 --download-superlu-dist=1 && make && make check > > > > > > Running check examples to verify correct installation > > > Using 
PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug > > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI > process > > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI > processes > > > 1a2,19 > > > > CUDA version: v 10020 > > > > CUDA Devices: > > > > > > > > 0 : Quadro T2000 7 5 > > > > Global memory: 3911 mb > > > > Shared memory: 48 kb > > > > Constant memory: 64 kb > > > > Block registers: 65536 > > > > > > > > CUDA version: v 10020 > > > > CUDA Devices: > > > > > > > > 0 : Quadro T2000 7 5 > > > > Global memory: 3911 mb > > > > Shared memory: 48 kb > > > > Constant memory: 64 kb > > > > Block registers: 65536 > > > > > > > /home/balay/petsc/src/snes/tutorials > > > Possible problem with ex19 running with superlu_dist, diffs above > > > ========================================= > > > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI > process > > > Completed test examples > > > > > > > > > On Wed, 15 Apr 2020, Mark Adams wrote: > > > > > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay > wrote: > > > > > > > > > The build should work. It should give some verbose info [at > runtime] > > > > > regarding GPUs - from the following code. > > > > > > > > > > > > > > I don't see that and I am running GPUs in my code and have gotten > > > cusparse > > > > LU to run. Should I use '-info :sys:' ? > > > > > > > > > > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > > > > > void DisplayHeader() > > > > > { > > > > > const int kb = 1024; > > > > > const int mb = kb * kb; > > > > > // cout << "NBody.GPU" << endl << "=========" << endl << endl; > > > > > > > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." << > > > > > THRUST_MINOR_VERSION << endl << endl; > > > > > > > > > > int devCount; > > > > > cudaGetDeviceCount(&devCount); > > > > > printf( "CUDA Devices: \n \n"); > > > > > > > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > > > > > > > > > Satish > > > > > > > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > > > > > > > > > > I remember Barry said superlu gpu support is broken. > > > > > > --Junchao Zhang > > > > > > > > > > > > > > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams > wrote: > > > > > > > > > > > > > How does one use SuperLU with GPUs. I don't seem to get any GPU > > > > > > > performance data so I assume GPUs are not getting turned on. > Am I > > > wrong > > > > > > > about that? 
> > > > > > > > > > > > > > I configure with: > > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC > -fopenmp" > > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC > > > > > -fopenmp" > > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > > > --with-cxx=mpicxx > > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc > > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 > > > --download-metis > > > > > > > --download-superlu --download-superlu_dist --with-make-np=16 > > > > > > > --download-parmetis --download-triangle > > > > > > > > > > > > > > > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 > > > --with-x=0 > > > > > > > --with-64-bit-indices=0 --with-debugging=0 > > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > > > > > > > --with-threadsaftey=1 --with-log=1 > > > > > > > > > > > > > > Thanks, > > > > > > > Mark > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xsli at lbl.gov Sat Apr 18 10:45:35 2020 From: xsli at lbl.gov (Xiaoye S. Li) Date: Sat, 18 Apr 2020 08:45:35 -0700 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: When you install "-download-superlu_dist", that is from 'master' branch? In the error trace, I recognized this: > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ LU+0xc4)[0x20000195aff4] This is to free the L and U data structures at the end of the program. Sherry On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote: > Back to SuperLU + GPUs (adding Sherry) > > I get this error (appended) running 'check', as I said before. It looks > like ex19 is *failing* with CUDA but it is not clear it has anything to > do with SuperLU. I can not find these diagnostics that got printed after > the error in PETSc or SuperLU. > > So this is a problem, but moving on to my code (plex/ex11 in > mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. > I use superlu and GPUs, but they do not seem to be used in SuperLU: > > > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flop > --- Global --- --- Stage ---- Total GPU - CpuToGpu - - > GpuToCpu - GPU > Max Ratio Max Ratio Max Ratio Mess AvgLen > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size > Count Size %F > > --------------------------------------------------------------------------------------------------------------------------------------------------------------- > .... > MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 > 0.00e+00 0 0.00e+00 0* > > No CUDA version. The times are the same and no GPU communication above. So > SuperLU does not seem to be using GPUs. 
> > > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flop > --- Global --- --- Stage ---- Total > Max Ratio Max Ratio Max Ratio Mess AvgLen > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > ------------------------------------------------------------------------------------------------------------------------ > .... > MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 > > There are some differences: ex19 use DMDA and I use DMPlex, 'check' is run > in my home directory, where files can not be written, and I run my code in > the project areas. > > The timings are different without superlu so I think superlu is being > used. THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type > sell) > > jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view hdf5:re33d.h5 > -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 -petscspace_degree > 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 > -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 > -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor > -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason > -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover > -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 > -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 > -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 > -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly > -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 > -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 > .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 > -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: > -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type > superlu -mat_superlu_equil -dm_mat_type sell > > So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned on in > SuperLU. > Thoughts? > > Thanks, > Mark > > 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make > PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > check > Running check examples to verify correct installation > Using PETSC_DIR=/ccs/home/adams/petsc and > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes > 2c2,39 > < Number of SNES iterations = 2 > --- > > *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion > `cacheNode != __null' failed.*> [h50n09:102287] *** Process received > signal *** > > CUDA version: v 10010 > > CUDA Devices: > > > > 0 : Tesla V100-SXM2-16GB 7 0 > > Global memory: 16128 mb > > Shared memory: 48 kb > > Constant memory: 64 kb > > Block registers: 65536 > > > > [h50n09:102287] Signal: Aborted (6) > > [h50n09:102287] Associated errno: Unknown error 1072693248 (1072693248) > > [h50n09:102287] Signal code: User function (kill, sigsend, abort, etc.) 
> (0) > > [h50n09:102287] [ 0] [0x2000000504d8] > > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] > > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] > > [h50n09:102287] [ 3] /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] > > [h50n09:102287] [ 4] > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > [h50n09:102287] [ 5] > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > [h50n09:102287] [ 6] > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] > > [h50n09:102287] [ 7] > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] > > [h50n09:102287] [ 8] > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] > > [h50n09:102287] [ 9] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] > > [h50n09:102287] [10] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] > > [h50n09:102287] [11] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] > > [h50n09:102287] [12] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] > > [h50n09:102287] [13] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] > > [h50n09:102287] [14] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] > > [h50n09:102287] [15] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] > > [h50n09:102287] [16] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] > > [h50n09:102287] [17] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] > > [h50n09:102287] [18] > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] > > [h50n09:102287] [19] ./ex19[0x10001a6c] > > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] > > [h50n09:102287] [21] > /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] > > [h50n09:102287] *** End of error message *** > > ERROR: One or more process (first noticed rank 0) terminated with > signal 6 > /ccs/home/adams/petsc/src/snes/tutorials > Possible problem with ex19 running with superlu_dist, diffs above > > > > > #!/usr/bin/env python > if __name__ == '__main__': > import sys > import os > sys.path.insert(0, os.path.abspath('config')) > import configure > configure_options = [ > '--with-fc=0', > '--COPTFLAGS=-g -O2 -fPIC -fopenmp', > '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', > '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', > '--CUDAOPTFLAGS=-O2 -g', > '--with-ssl=0', > '--with-batch=0', > '--with-cxx=mpicxx', > '--with-mpiexec=jsrun -g1', > '--with-cuda=1', > '--with-cudac=nvcc', > '--download-p4est=1', > '--download-zlib', > '--download-hdf5=1', > '--download-metis', > '--download-superlu', > '--download-superlu_dist', > '--with-make-np=16', > # '--with-hwloc=0', > '--download-parmetis', > # 
'--download-hypre', > '--download-triangle', > # '--download-amgx', > # '--download-fblaslapack', > '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + > '/lib64 -lblas -llapack', > '--with-cc=mpicc', > # '--with-fc=mpif90', > '--with-shared-libraries=1', > # '--known-mpi-shared-libraries=1', > '--with-x=0', > '--with-64-bit-indices=0', > '--with-debugging=0', > 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', > '--with-openmp=1', > '--with-threadsaftey=1', > '--with-log=1' > ] > configure.petsc_configure(configure_options) > > > > On Wed, Apr 15, 2020 at 9:58 PM Satish Balay wrote: > >> The crash is inside Superlu_DIST - so don't know what to suggest. >> >> Might have to debug this via debugger and check with Sherry. >> >> Satish >> >> On Wed, 15 Apr 2020, Mark Adams wrote: >> >> > Ah, OK 'check' will test SuperLU. Semi worked: >> > >> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make >> > PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >> > check >> > Running check examples to verify correct installation >> > Using PETSC_DIR=/ccs/home/adams/petsc and >> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >> process >> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >> processes >> > 2c2,38 >> > < Number of SNES iterations = 2 >> > --- >> > > CUDA version: v 10010 >> > > CUDA Devices: >> > > >> > > 0 : Tesla V100-SXM2-16GB 7 0 >> > > Global memory: 16128 mb >> > > Shared memory: 48 kb >> > > Constant memory: 64 kb >> > > Block registers: 65536 >> > > >> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion >> > `cacheNode != __null' failed. >> > > [h16n07:78357] *** Process received signal *** >> > > [h16n07:78357] Signal: Aborted (6) >> > > [h16n07:78357] Signal code: (1704218624) >> > > [h16n07:78357] [ 0] [0x2000000504d8] >> > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] >> > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] >> > > [h16n07:78357] [ 3] >> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] >> > > [h16n07:78357] [ 4] >> > >> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >> > > [h16n07:78357] [ 5] >> > >> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >> > > [h16n07:78357] [ 6] >> > >> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] >> > > [h16n07:78357] [ 7] >> > >> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] >> > > [h16n07:78357] [ 8] >> > >> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] >> > > [h16n07:78357] [ 9] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] >> > > [h16n07:78357] [10] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] >> > > [h16n07:78357] [11] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] >> > > [h16n07:78357] [12] >> > >> 
/ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] >> > > [h16n07:78357] [13] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] >> > > [h16n07:78357] [14] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] >> > > [h16n07:78357] [15] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] >> > > [h16n07:78357] [16] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] >> > > [h16n07:78357] [17] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] >> > > [h16n07:78357] [18] >> > >> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] >> > > [h16n07:78357] [19] ./ex19[0x10002ac8] >> > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] >> > > [h16n07:78357] [21] >> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] >> > > [h16n07:78357] *** End of error message *** >> > > ERROR: One or more process (first noticed rank 0) terminated with >> signal >> > 6 >> > /ccs/home/adams/petsc/src/snes/tutorials >> > Possible problem with ex19 running with superlu_dist, diffs above >> > ========================================= >> > >> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay wrote: >> > >> > > Please send configure.log >> > > >> > > This is what I get on my linux build: >> > > >> > > [balay at p1 petsc]$ ./configure >> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 >> > > --with-openmp=1 --download-superlu-dist=1 && make && make check >> > > >> > > Running check examples to verify correct installation >> > > Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug >> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >> process >> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >> processes >> > > 1a2,19 >> > > > CUDA version: v 10020 >> > > > CUDA Devices: >> > > > >> > > > 0 : Quadro T2000 7 5 >> > > > Global memory: 3911 mb >> > > > Shared memory: 48 kb >> > > > Constant memory: 64 kb >> > > > Block registers: 65536 >> > > > >> > > > CUDA version: v 10020 >> > > > CUDA Devices: >> > > > >> > > > 0 : Quadro T2000 7 5 >> > > > Global memory: 3911 mb >> > > > Shared memory: 48 kb >> > > > Constant memory: 64 kb >> > > > Block registers: 65536 >> > > > >> > > /home/balay/petsc/src/snes/tutorials >> > > Possible problem with ex19 running with superlu_dist, diffs above >> > > ========================================= >> > > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI >> process >> > > Completed test examples >> > > >> > > >> > > On Wed, 15 Apr 2020, Mark Adams wrote: >> > > >> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay >> wrote: >> > > > >> > > > > The build should work. It should give some verbose info [at >> runtime] >> > > > > regarding GPUs - from the following code. >> > > > > >> > > > > >> > > > I don't see that and I am running GPUs in my code and have gotten >> > > cusparse >> > > > LU to run. Should I use '-info :sys:' ? 
>> > > > >> > > > >> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> >> > > > > void DisplayHeader() >> > > > > { >> > > > > const int kb = 1024; >> > > > > const int mb = kb * kb; >> > > > > // cout << "NBody.GPU" << endl << "=========" << endl << endl; >> > > > > >> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); >> > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." >> << >> > > > > THRUST_MINOR_VERSION << endl << endl; >> > > > > >> > > > > int devCount; >> > > > > cudaGetDeviceCount(&devCount); >> > > > > printf( "CUDA Devices: \n \n"); >> > > > > >> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< >> > > > > >> > > > > Satish >> > > > > >> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: >> > > > > >> > > > > > I remember Barry said superlu gpu support is broken. >> > > > > > --Junchao Zhang >> > > > > > >> > > > > > >> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams >> wrote: >> > > > > > >> > > > > > > How does one use SuperLU with GPUs. I don't seem to get any >> GPU >> > > > > > > performance data so I assume GPUs are not getting turned on. >> Am I >> > > wrong >> > > > > > > about that? >> > > > > > > >> > > > > > > I configure with: >> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC >> -fopenmp" >> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 >> -fPIC >> > > > > -fopenmp" >> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 >> > > --with-cxx=mpicxx >> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc >> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 >> > > --download-metis >> > > > > > > --download-superlu --download-superlu_dist --with-make-np=16 >> > > > > > > --download-parmetis --download-triangle >> > > > > > > >> > > > > >> > > >> --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 >> > > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 >> > > --with-x=0 >> > > > > > > --with-64-bit-indices=0 --with-debugging=0 >> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 >> > > > > > > --with-threadsaftey=1 --with-log=1 >> > > > > > > >> > > > > > > Thanks, >> > > > > > > Mark >> > > > > > > >> > > > > > >> > > > > >> > > > > >> > > > >> > > >> > > >> > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat Apr 18 13:43:48 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 18 Apr 2020 14:43:48 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: Sherry, I did rebase with master this week: SuperLU: Version: 5.2.1 Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include Library: -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu I see the same thing with a debug build. If anyone is interested in looking at this, I was also able to see that plex/ex10 in my branch, which is a very simple test , also does not crash and also does not seem to use GPUs in SuperLU. On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: > When you install "-download-superlu_dist", that is from 'master' branch? 
> > In the error trace, I recognized this: > > > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- > summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ > LU+0xc4)[0x20000195aff4] > > This is to free the L and U data structures at the end of the program. > > Sherry > > On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote: > >> Back to SuperLU + GPUs (adding Sherry) >> >> I get this error (appended) running 'check', as I said before. It looks >> like ex19 is *failing* with CUDA but it is not clear it has anything to >> do with SuperLU. I can not find these diagnostics that got printed after >> the error in PETSc or SuperLU. >> >> So this is a problem, but moving on to my code (plex/ex11 in >> mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. >> I use superlu and GPUs, but they do not seem to be used in SuperLU: >> >> >> ------------------------------------------------------------------------------------------------------------------------ >> Event Count Time (sec) Flop >> --- Global --- --- Stage ---- Total GPU - CpuToGpu - - >> GpuToCpu - GPU >> Max Ratio Max Ratio Max Ratio Mess AvgLen >> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size >> Count Size %F >> >> --------------------------------------------------------------------------------------------------------------------------------------------------------------- >> .... >> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 >> 0.00e+00 0 0.00e+00 0* >> >> No CUDA version. The times are the same and no GPU communication above. >> So SuperLU does not seem to be using GPUs. >> >> >> ------------------------------------------------------------------------------------------------------------------------ >> Event Count Time (sec) Flop >> --- Global --- --- Stage ---- Total >> Max Ratio Max Ratio Max Ratio Mess AvgLen >> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >> >> ------------------------------------------------------------------------------------------------------------------------ >> .... >> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 >> >> There are some differences: ex19 use DMDA and I use DMPlex, 'check' is >> run in my home directory, where files can not be written, and I run my code >> in the project areas. >> >> The timings are different without superlu so I think superlu is being >> used. 
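A note on reading the -log_view tables quoted above: in a CUDA-enabled build PETSc appends per-event GPU columns (GPU Mflop/s, CpuToGpu and GpuToCpu transfer counts and sizes, and GPU %F), but those counters only reflect work routed through PETSc's own GPU logging, so flops done privately inside an external factorization package are not guaranteed to show up there either way. As an illustrative sketch only (the launcher, executable and options are borrowed from the run below, not a verbatim command), one way to pull out just the header and the factorization event is:

  jsrun -n 1 -g 1 ./ex113d_no_cuda <usual solver options> -pc_type lu -pc_factor_mat_solver_type superlu -log_view | grep -E 'CpuToGpu|MatLUFactorNum'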
THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type >> sell) >> >> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view hdf5:re33d.h5 >> -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 -petscspace_degree >> 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 >> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 >> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor >> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason >> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover >> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 >> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 >> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 >> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly >> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 >> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 >> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 >> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: >> -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type >> superlu -mat_superlu_equil -dm_mat_type sell >> >> So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned on >> in SuperLU. >> Thoughts? >> >> Thanks, >> Mark >> >> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make >> PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >> check >> Running check examples to verify correct installation >> Using PETSC_DIR=/ccs/home/adams/petsc and >> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process >> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >> processes >> 2c2,39 >> < Number of SNES iterations = 2 >> --- >> >> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion >> `cacheNode != __null' failed.*> [h50n09:102287] *** Process received >> signal *** >> > CUDA version: v 10010 >> > CUDA Devices: >> > >> > 0 : Tesla V100-SXM2-16GB 7 0 >> > Global memory: 16128 mb >> > Shared memory: 48 kb >> > Constant memory: 64 kb >> > Block registers: 65536 >> > >> > [h50n09:102287] Signal: Aborted (6) >> > [h50n09:102287] Associated errno: Unknown error 1072693248 (1072693248) >> > [h50n09:102287] Signal code: User function (kill, sigsend, abort, etc.) 
>> (0) >> > [h50n09:102287] [ 0] [0x2000000504d8] >> > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] >> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] >> > [h50n09:102287] [ 3] >> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] >> > [h50n09:102287] [ 4] >> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >> > [h50n09:102287] [ 5] >> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >> > [h50n09:102287] [ 6] >> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] >> > [h50n09:102287] [ 7] >> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] >> > [h50n09:102287] [ 8] >> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] >> > [h50n09:102287] [ 9] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] >> > [h50n09:102287] [10] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] >> > [h50n09:102287] [11] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] >> > [h50n09:102287] [12] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] >> > [h50n09:102287] [13] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] >> > [h50n09:102287] [14] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] >> > [h50n09:102287] [15] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] >> > [h50n09:102287] [16] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] >> > [h50n09:102287] [17] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] >> > [h50n09:102287] [18] >> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] >> > [h50n09:102287] [19] ./ex19[0x10001a6c] >> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] >> > [h50n09:102287] [21] >> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] >> > [h50n09:102287] *** End of error message *** >> > ERROR: One or more process (first noticed rank 0) terminated with >> signal 6 >> /ccs/home/adams/petsc/src/snes/tutorials >> Possible problem with ex19 running with superlu_dist, diffs above >> >> >> >> >> #!/usr/bin/env python >> if __name__ == '__main__': >> import sys >> import os >> sys.path.insert(0, os.path.abspath('config')) >> import configure >> configure_options = [ >> '--with-fc=0', >> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', >> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', >> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', >> '--CUDAOPTFLAGS=-O2 -g', >> '--with-ssl=0', >> '--with-batch=0', >> '--with-cxx=mpicxx', >> '--with-mpiexec=jsrun -g1', >> '--with-cuda=1', >> '--with-cudac=nvcc', >> '--download-p4est=1', >> '--download-zlib', >> '--download-hdf5=1', >> '--download-metis', >> '--download-superlu', >> '--download-superlu_dist', >> 
'--with-make-np=16', >> # '--with-hwloc=0', >> '--download-parmetis', >> # '--download-hypre', >> '--download-triangle', >> # '--download-amgx', >> # '--download-fblaslapack', >> '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + >> '/lib64 -lblas -llapack', >> '--with-cc=mpicc', >> # '--with-fc=mpif90', >> '--with-shared-libraries=1', >> # '--known-mpi-shared-libraries=1', >> '--with-x=0', >> '--with-64-bit-indices=0', >> '--with-debugging=0', >> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', >> '--with-openmp=1', >> '--with-threadsaftey=1', >> '--with-log=1' >> ] >> configure.petsc_configure(configure_options) >> >> >> >> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay wrote: >> >>> The crash is inside Superlu_DIST - so don't know what to suggest. >>> >>> Might have to debug this via debugger and check with Sherry. >>> >>> Satish >>> >>> On Wed, 15 Apr 2020, Mark Adams wrote: >>> >>> > Ah, OK 'check' will test SuperLU. Semi worked: >>> > >>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make >>> > PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>> > check >>> > Running check examples to verify correct installation >>> > Using PETSC_DIR=/ccs/home/adams/petsc and >>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>> process >>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>> processes >>> > 2c2,38 >>> > < Number of SNES iterations = 2 >>> > --- >>> > > CUDA version: v 10010 >>> > > CUDA Devices: >>> > > >>> > > 0 : Tesla V100-SXM2-16GB 7 0 >>> > > Global memory: 16128 mb >>> > > Shared memory: 48 kb >>> > > Constant memory: 64 kb >>> > > Block registers: 65536 >>> > > >>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion >>> > `cacheNode != __null' failed. 
>>> > > [h16n07:78357] *** Process received signal *** >>> > > [h16n07:78357] Signal: Aborted (6) >>> > > [h16n07:78357] Signal code: (1704218624) >>> > > [h16n07:78357] [ 0] [0x2000000504d8] >>> > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] >>> > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] >>> > > [h16n07:78357] [ 3] >>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] >>> > > [h16n07:78357] [ 4] >>> > >>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >>> > > [h16n07:78357] [ 5] >>> > >>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >>> > > [h16n07:78357] [ 6] >>> > >>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] >>> > > [h16n07:78357] [ 7] >>> > >>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] >>> > > [h16n07:78357] [ 8] >>> > >>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] >>> > > [h16n07:78357] [ 9] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] >>> > > [h16n07:78357] [10] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] >>> > > [h16n07:78357] [11] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] >>> > > [h16n07:78357] [12] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] >>> > > [h16n07:78357] [13] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] >>> > > [h16n07:78357] [14] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] >>> > > [h16n07:78357] [15] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] >>> > > [h16n07:78357] [16] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] >>> > > [h16n07:78357] [17] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] >>> > > [h16n07:78357] [18] >>> > >>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] >>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] >>> > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] >>> > > [h16n07:78357] [21] >>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] >>> > > [h16n07:78357] *** End of error message *** >>> > > ERROR: One or more process (first noticed rank 0) terminated with >>> signal >>> > 6 >>> > /ccs/home/adams/petsc/src/snes/tutorials >>> > Possible problem with ex19 running with superlu_dist, diffs above >>> > ========================================= >>> > >>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay >>> wrote: >>> > >>> > > Please send configure.log >>> > > >>> > > This is what I get on my linux build: >>> > > >>> > > [balay at p1 petsc]$ ./configure >>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 
>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make check >>> > > >>> > > Running check examples to verify correct installation >>> > > Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug >>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>> process >>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>> processes >>> > > 1a2,19 >>> > > > CUDA version: v 10020 >>> > > > CUDA Devices: >>> > > > >>> > > > 0 : Quadro T2000 7 5 >>> > > > Global memory: 3911 mb >>> > > > Shared memory: 48 kb >>> > > > Constant memory: 64 kb >>> > > > Block registers: 65536 >>> > > > >>> > > > CUDA version: v 10020 >>> > > > CUDA Devices: >>> > > > >>> > > > 0 : Quadro T2000 7 5 >>> > > > Global memory: 3911 mb >>> > > > Shared memory: 48 kb >>> > > > Constant memory: 64 kb >>> > > > Block registers: 65536 >>> > > > >>> > > /home/balay/petsc/src/snes/tutorials >>> > > Possible problem with ex19 running with superlu_dist, diffs above >>> > > ========================================= >>> > > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI >>> process >>> > > Completed test examples >>> > > >>> > > >>> > > On Wed, 15 Apr 2020, Mark Adams wrote: >>> > > >>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay >>> wrote: >>> > > > >>> > > > > The build should work. It should give some verbose info [at >>> runtime] >>> > > > > regarding GPUs - from the following code. >>> > > > > >>> > > > > >>> > > > I don't see that and I am running GPUs in my code and have gotten >>> > > cusparse >>> > > > LU to run. Should I use '-info :sys:' ? >>> > > > >>> > > > >>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> >>> > > > > void DisplayHeader() >>> > > > > { >>> > > > > const int kb = 1024; >>> > > > > const int mb = kb * kb; >>> > > > > // cout << "NBody.GPU" << endl << "=========" << endl << >>> endl; >>> > > > > >>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); >>> > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." >>> << >>> > > > > THRUST_MINOR_VERSION << endl << endl; >>> > > > > >>> > > > > int devCount; >>> > > > > cudaGetDeviceCount(&devCount); >>> > > > > printf( "CUDA Devices: \n \n"); >>> > > > > >>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< >>> > > > > >>> > > > > Satish >>> > > > > >>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: >>> > > > > >>> > > > > > I remember Barry said superlu gpu support is broken. >>> > > > > > --Junchao Zhang >>> > > > > > >>> > > > > > >>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams >>> wrote: >>> > > > > > >>> > > > > > > How does one use SuperLU with GPUs. I don't seem to get any >>> GPU >>> > > > > > > performance data so I assume GPUs are not getting turned on. >>> Am I >>> > > wrong >>> > > > > > > about that? 
>>> > > > > > > >>> > > > > > > I configure with: >>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC >>> -fopenmp" >>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 >>> -fPIC >>> > > > > -fopenmp" >>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 >>> > > --with-cxx=mpicxx >>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc >>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 >>> > > --download-metis >>> > > > > > > --download-superlu --download-superlu_dist --with-make-np=16 >>> > > > > > > --download-parmetis --download-triangle >>> > > > > > > >>> > > > > >>> > > >>> --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 >>> > > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 >>> > > --with-x=0 >>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 >>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 >>> > > > > > > --with-threadsaftey=1 --with-log=1 >>> > > > > > > >>> > > > > > > Thanks, >>> > > > > > > Mark >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > > >>> > > > >>> > > >>> > > >>> > >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From xsli at lbl.gov Sat Apr 18 14:05:09 2020 From: xsli at lbl.gov (Xiaoye S. Li) Date: Sat, 18 Apr 2020 12:05:09 -0700 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: Mark, It seems you are talking about serial superlu? There is no GPU support in it. Only superlu_dist has GPU. But I don't know why there is a crash. Sherry On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote: > Sherry, I did rebase with master this week: > > SuperLU: > Version: 5.2.1 > Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > Library: > -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > I see the same thing with a debug build. > > If anyone is interested in looking at this, I was also able to see that > plex/ex10 in my branch, which is a very simple test , also does not crash > and also does not seem to use GPUs in SuperLU. > > > On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: > >> When you install "-download-superlu_dist", that is from 'master' branch? >> >> In the error trace, I recognized this: >> >> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- >> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ >> LU+0xc4)[0x20000195aff4] >> >> This is to free the L and U data structures at the end of the program. >> >> Sherry >> >> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote: >> >>> Back to SuperLU + GPUs (adding Sherry) >>> >>> I get this error (appended) running 'check', as I said before. It looks >>> like ex19 is *failing* with CUDA but it is not clear it has anything to >>> do with SuperLU. I can not find these diagnostics that got printed after >>> the error in PETSc or SuperLU. >>> >>> So this is a problem, but moving on to my code (plex/ex11 in >>> mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. 
>>> I use superlu and GPUs, but they do not seem to be used in SuperLU: >>> >>> >>> ------------------------------------------------------------------------------------------------------------------------ >>> Event Count Time (sec) Flop >>> --- Global --- --- Stage ---- Total GPU - CpuToGpu - - >>> GpuToCpu - GPU >>> Max Ratio Max Ratio Max Ratio Mess AvgLen >>> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size >>> Count Size %F >>> >>> --------------------------------------------------------------------------------------------------------------------------------------------------------------- >>> .... >>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 >>> 0.00e+00 0 0.00e+00 0* >>> >>> No CUDA version. The times are the same and no GPU communication above. >>> So SuperLU does not seem to be using GPUs. >>> >>> >>> ------------------------------------------------------------------------------------------------------------------------ >>> Event Count Time (sec) Flop >>> --- Global --- --- Stage ---- Total >>> Max Ratio Max Ratio Max Ratio Mess AvgLen >>> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >>> >>> ------------------------------------------------------------------------------------------------------------------------ >>> .... >>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 >>> >>> There are some differences: ex19 use DMDA and I use DMPlex, 'check' is >>> run in my home directory, where files can not be written, and I run my code >>> in the project areas. >>> >>> The timings are different without superlu so I think superlu is being >>> used. THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type >>> sell) >>> >>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view hdf5:re33d.h5 >>> -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 -petscspace_degree >>> 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 >>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 >>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor >>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason >>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover >>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 >>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 >>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 >>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly >>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 >>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 >>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 >>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: >>> -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type >>> superlu -mat_superlu_equil -dm_mat_type sell >>> >>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned on >>> in SuperLU. >>> Thoughts? 
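As Sherry points out above, the serial SuperLU and SuperLU_DIST interfaces are separate solver packages in PETSc, and only SuperLU_DIST has a GPU code path. A minimal sketch of the runtime options involved (illustrative, assuming an LU preconditioner as in the runs in this thread):

  -pc_type lu -pc_factor_mat_solver_type superlu        # serial SuperLU, CPU only
  -pc_type lu -pc_factor_mat_solver_type superlu_dist   # SuperLU_DIST, the package with GPU support

Running with -ksp_view reports which package actually performed the factorization, which is a quick way to confirm what a given run used.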
>>> >>> Thanks, >>> Mark >>> >>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make >>> PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >>> check >>> Running check examples to verify correct installation >>> Using PETSC_DIR=/ccs/home/adams/petsc and >>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process >>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>> processes >>> 2c2,39 >>> < Number of SNES iterations = 2 >>> --- >>> >>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion >>> `cacheNode != __null' failed.*> [h50n09:102287] *** Process received >>> signal *** >>> > CUDA version: v 10010 >>> > CUDA Devices: >>> > >>> > 0 : Tesla V100-SXM2-16GB 7 0 >>> > Global memory: 16128 mb >>> > Shared memory: 48 kb >>> > Constant memory: 64 kb >>> > Block registers: 65536 >>> > >>> > [h50n09:102287] Signal: Aborted (6) >>> > [h50n09:102287] Associated errno: Unknown error 1072693248 (1072693248) >>> > [h50n09:102287] Signal code: User function (kill, sigsend, abort, >>> etc.) (0) >>> > [h50n09:102287] [ 0] [0x2000000504d8] >>> > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] >>> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] >>> > [h50n09:102287] [ 3] >>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] >>> > [h50n09:102287] [ 4] >>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >>> > [h50n09:102287] [ 5] >>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >>> > [h50n09:102287] [ 6] >>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] >>> > [h50n09:102287] [ 7] >>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] >>> > [h50n09:102287] [ 8] >>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] >>> > [h50n09:102287] [ 9] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] >>> > [h50n09:102287] [10] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] >>> > [h50n09:102287] [11] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] >>> > [h50n09:102287] [12] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] >>> > [h50n09:102287] [13] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] >>> > [h50n09:102287] [14] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] >>> > [h50n09:102287] [15] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] >>> > [h50n09:102287] [16] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] >>> > [h50n09:102287] [17] >>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] >>> > [h50n09:102287] [18] >>> 
/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] >>> > [h50n09:102287] [19] ./ex19[0x10001a6c] >>> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] >>> > [h50n09:102287] [21] >>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] >>> > [h50n09:102287] *** End of error message *** >>> > ERROR: One or more process (first noticed rank 0) terminated with >>> signal 6 >>> /ccs/home/adams/petsc/src/snes/tutorials >>> Possible problem with ex19 running with superlu_dist, diffs above >>> >>> >>> >>> >>> #!/usr/bin/env python >>> if __name__ == '__main__': >>> import sys >>> import os >>> sys.path.insert(0, os.path.abspath('config')) >>> import configure >>> configure_options = [ >>> '--with-fc=0', >>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', >>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', >>> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', >>> '--CUDAOPTFLAGS=-O2 -g', >>> '--with-ssl=0', >>> '--with-batch=0', >>> '--with-cxx=mpicxx', >>> '--with-mpiexec=jsrun -g1', >>> '--with-cuda=1', >>> '--with-cudac=nvcc', >>> '--download-p4est=1', >>> '--download-zlib', >>> '--download-hdf5=1', >>> '--download-metis', >>> '--download-superlu', >>> '--download-superlu_dist', >>> '--with-make-np=16', >>> # '--with-hwloc=0', >>> '--download-parmetis', >>> # '--download-hypre', >>> '--download-triangle', >>> # '--download-amgx', >>> # '--download-fblaslapack', >>> '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] + >>> '/lib64 -lblas -llapack', >>> '--with-cc=mpicc', >>> # '--with-fc=mpif90', >>> '--with-shared-libraries=1', >>> # '--known-mpi-shared-libraries=1', >>> '--with-x=0', >>> '--with-64-bit-indices=0', >>> '--with-debugging=0', >>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', >>> '--with-openmp=1', >>> '--with-threadsaftey=1', >>> '--with-log=1' >>> ] >>> configure.petsc_configure(configure_options) >>> >>> >>> >>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay wrote: >>> >>>> The crash is inside Superlu_DIST - so don't know what to suggest. >>>> >>>> Might have to debug this via debugger and check with Sherry. >>>> >>>> Satish >>>> >>>> On Wed, 15 Apr 2020, Mark Adams wrote: >>>> >>>> > Ah, OK 'check' will test SuperLU. Semi worked: >>>> > >>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make >>>> > PETSC_DIR=/ccs/home/adams/petsc >>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>>> > check >>>> > Running check examples to verify correct installation >>>> > Using PETSC_DIR=/ccs/home/adams/petsc and >>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>>> process >>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>>> processes >>>> > 2c2,38 >>>> > < Number of SNES iterations = 2 >>>> > --- >>>> > > CUDA version: v 10010 >>>> > > CUDA Devices: >>>> > > >>>> > > 0 : Tesla V100-SXM2-16GB 7 0 >>>> > > Global memory: 16128 mb >>>> > > Shared memory: 48 kb >>>> > > Constant memory: 64 kb >>>> > > Block registers: 65536 >>>> > > >>>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion >>>> > `cacheNode != __null' failed. 
>>>> > > [h16n07:78357] *** Process received signal *** >>>> > > [h16n07:78357] Signal: Aborted (6) >>>> > > [h16n07:78357] Signal code: (1704218624) >>>> > > [h16n07:78357] [ 0] [0x2000000504d8] >>>> > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] >>>> > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] >>>> > > [h16n07:78357] [ 3] >>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] >>>> > > [h16n07:78357] [ 4] >>>> > >>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >>>> > > [h16n07:78357] [ 5] >>>> > >>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >>>> > > [h16n07:78357] [ 6] >>>> > >>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] >>>> > > [h16n07:78357] [ 7] >>>> > >>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] >>>> > > [h16n07:78357] [ 8] >>>> > >>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] >>>> > > [h16n07:78357] [ 9] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] >>>> > > [h16n07:78357] [10] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] >>>> > > [h16n07:78357] [11] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] >>>> > > [h16n07:78357] [12] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] >>>> > > [h16n07:78357] [13] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] >>>> > > [h16n07:78357] [14] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] >>>> > > [h16n07:78357] [15] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] >>>> > > [h16n07:78357] [16] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] >>>> > > [h16n07:78357] [17] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] >>>> > > [h16n07:78357] [18] >>>> > >>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] >>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] >>>> > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] >>>> > > [h16n07:78357] [21] >>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] >>>> > > [h16n07:78357] *** End of error message *** >>>> > > ERROR: One or more process (first noticed rank 0) terminated with >>>> signal >>>> > 6 >>>> > /ccs/home/adams/petsc/src/snes/tutorials >>>> > Possible problem with ex19 running with superlu_dist, diffs above >>>> > ========================================= >>>> > >>>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay >>>> wrote: >>>> > >>>> > > Please send configure.log >>>> > > >>>> > > This is what I get on my linux build: >>>> > > >>>> > > [balay at p1 petsc]$ ./configure 
>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 >>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make check >>>> > > >>>> > > Running check examples to verify correct installation >>>> > > Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug >>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>>> process >>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>>> processes >>>> > > 1a2,19 >>>> > > > CUDA version: v 10020 >>>> > > > CUDA Devices: >>>> > > > >>>> > > > 0 : Quadro T2000 7 5 >>>> > > > Global memory: 3911 mb >>>> > > > Shared memory: 48 kb >>>> > > > Constant memory: 64 kb >>>> > > > Block registers: 65536 >>>> > > > >>>> > > > CUDA version: v 10020 >>>> > > > CUDA Devices: >>>> > > > >>>> > > > 0 : Quadro T2000 7 5 >>>> > > > Global memory: 3911 mb >>>> > > > Shared memory: 48 kb >>>> > > > Constant memory: 64 kb >>>> > > > Block registers: 65536 >>>> > > > >>>> > > /home/balay/petsc/src/snes/tutorials >>>> > > Possible problem with ex19 running with superlu_dist, diffs above >>>> > > ========================================= >>>> > > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI >>>> process >>>> > > Completed test examples >>>> > > >>>> > > >>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: >>>> > > >>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay >>>> wrote: >>>> > > > >>>> > > > > The build should work. It should give some verbose info [at >>>> runtime] >>>> > > > > regarding GPUs - from the following code. >>>> > > > > >>>> > > > > >>>> > > > I don't see that and I am running GPUs in my code and have gotten >>>> > > cusparse >>>> > > > LU to run. Should I use '-info :sys:' ? >>>> > > > >>>> > > > >>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> >>>> > > > > void DisplayHeader() >>>> > > > > { >>>> > > > > const int kb = 1024; >>>> > > > > const int mb = kb * kb; >>>> > > > > // cout << "NBody.GPU" << endl << "=========" << endl << >>>> endl; >>>> > > > > >>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); >>>> > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << >>>> "." << >>>> > > > > THRUST_MINOR_VERSION << endl << endl; >>>> > > > > >>>> > > > > int devCount; >>>> > > > > cudaGetDeviceCount(&devCount); >>>> > > > > printf( "CUDA Devices: \n \n"); >>>> > > > > >>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< >>>> > > > > >>>> > > > > Satish >>>> > > > > >>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: >>>> > > > > >>>> > > > > > I remember Barry said superlu gpu support is broken. >>>> > > > > > --Junchao Zhang >>>> > > > > > >>>> > > > > > >>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams >>>> wrote: >>>> > > > > > >>>> > > > > > > How does one use SuperLU with GPUs. I don't seem to get any >>>> GPU >>>> > > > > > > performance data so I assume GPUs are not getting turned >>>> on. Am I >>>> > > wrong >>>> > > > > > > about that? 
>>>> > > > > > > >>>> > > > > > > I configure with: >>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC >>>> -fopenmp" >>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 >>>> -fPIC >>>> > > > > -fopenmp" >>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 >>>> > > --with-cxx=mpicxx >>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc >>>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 >>>> > > --download-metis >>>> > > > > > > --download-superlu --download-superlu_dist --with-make-np=16 >>>> > > > > > > --download-parmetis --download-triangle >>>> > > > > > > >>>> > > > > >>>> > > >>>> --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 >>>> > > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 >>>> > > --with-x=0 >>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 >>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 >>>> > > > > > > --with-threadsaftey=1 --with-log=1 >>>> > > > > > > >>>> > > > > > > Thanks, >>>> > > > > > > Mark >>>> > > > > > > >>>> > > > > > >>>> > > > > >>>> > > > > >>>> > > > >>>> > > >>>> > > >>>> > >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Sat Apr 18 15:18:28 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sat, 18 Apr 2020 17:18:28 -0300 Subject: [petsc-users] Ignoring PETSC_ARCH for make check? Message-ID: Hi all, I have just successfully compiled 3.13.0. But with install this is what I get $ make PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 PETSC_ARCH=arch-linux2-c-opt install *** Using PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 PETSC_ARCH=arch-linux2-c-opt *** *** Installing PETSc at prefix location: /home/santiago/usr/local *** ==================================== Install complete. Now to check if the libraries are working do (in current directory): make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" check ==================================== /usr/bin/make --no-print-directory -f makefile PETSC_ARCH=arch-linux2-c-opt PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 mpi4py-install petsc4py-install libmesh-install mfem-install slepc-install hpddm-install amrex-install make[2]: Nothing to be done for 'mpi4py-install'. make[2]: Nothing to be done for 'petsc4py-install'. make[2]: Nothing to be done for 'libmesh-install'. make[2]: Nothing to be done for 'mfem-install'. make[2]: Nothing to be done for 'slepc-install'. make[2]: Nothing to be done for 'hpddm-install'. make[2]: Nothing to be done for 'amrex-install'. What is strange to me is that I am instructed to execute a line with PETSC_ARCH=", while my environment has PETSC_ARCH=arch-linux2-c-opt Why is that? PS: The same happened to me with various other compilations I have just tested, with 3.9, 3.10, 3.11, 3.12 PS2: I do not recall seeing this ever before, although I may have missed it/forgotten. Thanks in advance, Santiago -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sat Apr 18 15:43:57 2020 From: jed at jedbrown.org (Jed Brown) Date: Sat, 18 Apr 2020 14:43:57 -0600 Subject: [petsc-users] Ignoring PETSC_ARCH for make check? 
In-Reply-To: References: Message-ID: <874ktgcynm.fsf@jedbrown.org> It's intentional and been like this for ages. Prefix installs have only PETSC_DIR (just a path, like other packages), and *must not* set PETSC_ARCH. san.temporal at gmail.com writes: > Hi all, > > I have just successfully compiled 3.13.0. But with install this is what I > get > > $ make PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > PETSC_ARCH=arch-linux2-c-opt install > *** Using > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > PETSC_ARCH=arch-linux2-c-opt *** > *** Installing PETSc at prefix location: /home/santiago/usr/local *** > ==================================== > Install complete. > Now to check if the libraries are working do (in current directory): > make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" check > ==================================== > /usr/bin/make --no-print-directory -f makefile > PETSC_ARCH=arch-linux2-c-opt > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > mpi4py-install petsc4py-install libmesh-install mfem-install slepc-install > hpddm-install amrex-install > make[2]: Nothing to be done for 'mpi4py-install'. > make[2]: Nothing to be done for 'petsc4py-install'. > make[2]: Nothing to be done for 'libmesh-install'. > make[2]: Nothing to be done for 'mfem-install'. > make[2]: Nothing to be done for 'slepc-install'. > make[2]: Nothing to be done for 'hpddm-install'. > make[2]: Nothing to be done for 'amrex-install'. > > What is strange to me is that I am instructed to execute a line with > PETSC_ARCH=", while my environment has PETSC_ARCH=arch-linux2-c-opt > Why is that? > > PS: The same happened to me with various other compilations I have just > tested, with 3.9, 3.10, 3.11, 3.12 > > PS2: I do not recall seeing this ever before, although I may have missed > it/forgotten. > > Thanks in advance, > Santiago From mfadams at lbl.gov Sat Apr 18 18:54:08 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 18 Apr 2020 19:54:08 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li wrote: > Mark, > > It seems you are talking about serial superlu? There is no GPU support > in it. Only superlu_dist has GPU. > I am using superlu_dist on one processor. Should that work? > > But I don't know why there is a crash. > > Sherry > > On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote: > >> Sherry, I did rebase with master this week: >> >> SuperLU: >> Version: 5.2.1 >> Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include >> Library: >> -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib >> -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu >> >> I see the same thing with a debug build. >> >> If anyone is interested in looking at this, I was also able to see that >> plex/ex10 in my branch, which is a very simple test , also does not crash >> and also does not seem to use GPUs in SuperLU. >> >> >> On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: >> >>> When you install "-download-superlu_dist", that is from 'master' branch? >>> >>> In the error trace, I recognized this: >>> >>> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- >>> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ >>> LU+0xc4)[0x20000195aff4] >>> >>> This is to free the L and U data structures at the end of the program. 
>>> >>> Sherry >>> >>> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote: >>> >>>> Back to SuperLU + GPUs (adding Sherry) >>>> >>>> I get this error (appended) running 'check', as I said before. It looks >>>> like ex19 is *failing* with CUDA but it is not clear it has anything >>>> to do with SuperLU. I can not find these diagnostics that got printed after >>>> the error in PETSc or SuperLU. >>>> >>>> So this is a problem, but moving on to my code (plex/ex11 in >>>> mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. >>>> I use superlu and GPUs, but they do not seem to be used in SuperLU: >>>> >>>> >>>> ------------------------------------------------------------------------------------------------------------------------ >>>> Event Count Time (sec) Flop >>>> --- Global --- --- Stage ---- Total GPU - CpuToGpu - - >>>> GpuToCpu - GPU >>>> Max Ratio Max Ratio Max Ratio Mess AvgLen >>>> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size >>>> Count Size %F >>>> >>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------- >>>> .... >>>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 >>>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 >>>> 0.00e+00 0 0.00e+00 0* >>>> >>>> No CUDA version. The times are the same and no GPU communication above. >>>> So SuperLU does not seem to be using GPUs. >>>> >>>> >>>> ------------------------------------------------------------------------------------------------------------------------ >>>> Event Count Time (sec) Flop >>>> --- Global --- --- Stage ---- Total >>>> Max Ratio Max Ratio Max Ratio Mess AvgLen >>>> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >>>> >>>> ------------------------------------------------------------------------------------------------------------------------ >>>> .... >>>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 >>>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 >>>> >>>> There are some differences: ex19 use DMDA and I use DMPlex, 'check' is >>>> run in my home directory, where files can not be written, and I run my code >>>> in the project areas. >>>> >>>> The timings are different without superlu so I think superlu is being >>>> used. 
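For reference, a minimal sketch (not from the quoted run; the helper name is made up) of how to confirm which factorization package a solve actually selected, rather than inferring it from timings. It assumes a KSP named ksp that has already been configured, and PETSc 3.13-era function names:

#include <petscksp.h>

/* Hedged sketch: report the matrix solver package behind a PCLU.
   Assumes ksp has already been set up; the helper name is illustrative. */
PetscErrorCode ReportFactorPackage(KSP ksp)
{
  PC             pc;
  MatSolverType  stype;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCFactorGetMatSolverType(pc, &stype);CHKERRQ(ierr);
  /* e.g. "superlu" (serial, no GPU support) vs "superlu_dist" */
  ierr = PetscPrintf(PETSC_COMM_WORLD, "Factorization package: %s\n", stype);CHKERRQ(ierr);
  return 0;
}

On the command line, -ksp_view on the solve reports the same package in the PC section; as noted elsewhere in this thread, only superlu_dist (not the serial superlu library) has GPU support.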
THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type >>>> sell) >>>> >>>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view >>>> hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 >>>> -petscspace_degree 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 >>>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 >>>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor >>>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason >>>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover >>>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 >>>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 >>>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 >>>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly >>>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 >>>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 >>>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 >>>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: >>>> -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type >>>> superlu -mat_superlu_equil -dm_mat_type sell >>>> >>>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned on >>>> in SuperLU. >>>> Thoughts? >>>> >>>> Thanks, >>>> Mark >>>> >>>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make >>>> PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >>>> check >>>> Running check examples to verify correct installation >>>> Using PETSC_DIR=/ccs/home/adams/petsc and >>>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >>>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>>> process >>>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>>> processes >>>> 2c2,39 >>>> < Number of SNES iterations = 2 >>>> --- >>>> >>>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): Assertion >>>> `cacheNode != __null' failed.*> [h50n09:102287] *** Process received >>>> signal *** >>>> > CUDA version: v 10010 >>>> > CUDA Devices: >>>> > >>>> > 0 : Tesla V100-SXM2-16GB 7 0 >>>> > Global memory: 16128 mb >>>> > Shared memory: 48 kb >>>> > Constant memory: 64 kb >>>> > Block registers: 65536 >>>> > >>>> > [h50n09:102287] Signal: Aborted (6) >>>> > [h50n09:102287] Associated errno: Unknown error 1072693248 >>>> (1072693248) >>>> > [h50n09:102287] Signal code: User function (kill, sigsend, abort, >>>> etc.) 
(0) >>>> > [h50n09:102287] [ 0] [0x2000000504d8] >>>> > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] >>>> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] >>>> > [h50n09:102287] [ 3] >>>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] >>>> > [h50n09:102287] [ 4] >>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >>>> > [h50n09:102287] [ 5] >>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >>>> > [h50n09:102287] [ 6] >>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] >>>> > [h50n09:102287] [ 7] >>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] >>>> > [h50n09:102287] [ 8] >>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] >>>> > [h50n09:102287] [ 9] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] >>>> > [h50n09:102287] [10] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] >>>> > [h50n09:102287] [11] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] >>>> > [h50n09:102287] [12] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] >>>> > [h50n09:102287] [13] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] >>>> > [h50n09:102287] [14] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] >>>> > [h50n09:102287] [15] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] >>>> > [h50n09:102287] [16] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] >>>> > [h50n09:102287] [17] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] >>>> > [h50n09:102287] [18] >>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] >>>> > [h50n09:102287] [19] ./ex19[0x10001a6c] >>>> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] >>>> > [h50n09:102287] [21] >>>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] >>>> > [h50n09:102287] *** End of error message *** >>>> > ERROR: One or more process (first noticed rank 0) terminated with >>>> signal 6 >>>> /ccs/home/adams/petsc/src/snes/tutorials >>>> Possible problem with ex19 running with superlu_dist, diffs above >>>> >>>> >>>> >>>> >>>> #!/usr/bin/env python >>>> if __name__ == '__main__': >>>> import sys >>>> import os >>>> sys.path.insert(0, os.path.abspath('config')) >>>> import configure >>>> configure_options = [ >>>> '--with-fc=0', >>>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', >>>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', >>>> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', >>>> '--CUDAOPTFLAGS=-O2 -g', >>>> '--with-ssl=0', >>>> '--with-batch=0', >>>> '--with-cxx=mpicxx', >>>> '--with-mpiexec=jsrun -g1', >>>> '--with-cuda=1', >>>> '--with-cudac=nvcc', >>>> '--download-p4est=1', 
>>>> '--download-zlib', >>>> '--download-hdf5=1', >>>> '--download-metis', >>>> '--download-superlu', >>>> '--download-superlu_dist', >>>> '--with-make-np=16', >>>> # '--with-hwloc=0', >>>> '--download-parmetis', >>>> # '--download-hypre', >>>> '--download-triangle', >>>> # '--download-amgx', >>>> # '--download-fblaslapack', >>>> '--with-blaslapack-lib=-L' + os.environ['OLCF_NETLIB_LAPACK_ROOT'] >>>> + '/lib64 -lblas -llapack', >>>> '--with-cc=mpicc', >>>> # '--with-fc=mpif90', >>>> '--with-shared-libraries=1', >>>> # '--known-mpi-shared-libraries=1', >>>> '--with-x=0', >>>> '--with-64-bit-indices=0', >>>> '--with-debugging=0', >>>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', >>>> '--with-openmp=1', >>>> '--with-threadsaftey=1', >>>> '--with-log=1' >>>> ] >>>> configure.petsc_configure(configure_options) >>>> >>>> >>>> >>>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay wrote: >>>> >>>>> The crash is inside Superlu_DIST - so don't know what to suggest. >>>>> >>>>> Might have to debug this via debugger and check with Sherry. >>>>> >>>>> Satish >>>>> >>>>> On Wed, 15 Apr 2020, Mark Adams wrote: >>>>> >>>>> > Ah, OK 'check' will test SuperLU. Semi worked: >>>>> > >>>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make >>>>> > PETSC_DIR=/ccs/home/adams/petsc >>>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>>>> > check >>>>> > Running check examples to verify correct installation >>>>> > Using PETSC_DIR=/ccs/home/adams/petsc and >>>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>>>> process >>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>>>> processes >>>>> > 2c2,38 >>>>> > < Number of SNES iterations = 2 >>>>> > --- >>>>> > > CUDA version: v 10010 >>>>> > > CUDA Devices: >>>>> > > >>>>> > > 0 : Tesla V100-SXM2-16GB 7 0 >>>>> > > Global memory: 16128 mb >>>>> > > Shared memory: 48 kb >>>>> > > Constant memory: 64 kb >>>>> > > Block registers: 65536 >>>>> > > >>>>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): >>>>> Assertion >>>>> > `cacheNode != __null' failed. 
>>>>> > > [h16n07:78357] *** Process received signal *** >>>>> > > [h16n07:78357] Signal: Aborted (6) >>>>> > > [h16n07:78357] Signal code: (1704218624) >>>>> > > [h16n07:78357] [ 0] [0x2000000504d8] >>>>> > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] >>>>> > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] >>>>> > > [h16n07:78357] [ 3] >>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] >>>>> > > [h16n07:78357] [ 4] >>>>> > >>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >>>>> > > [h16n07:78357] [ 5] >>>>> > >>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >>>>> > > [h16n07:78357] [ 6] >>>>> > >>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] >>>>> > > [h16n07:78357] [ 7] >>>>> > >>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] >>>>> > > [h16n07:78357] [ 8] >>>>> > >>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] >>>>> > > [h16n07:78357] [ 9] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] >>>>> > > [h16n07:78357] [10] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] >>>>> > > [h16n07:78357] [11] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] >>>>> > > [h16n07:78357] [12] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] >>>>> > > [h16n07:78357] [13] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] >>>>> > > [h16n07:78357] [14] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] >>>>> > > [h16n07:78357] [15] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] >>>>> > > [h16n07:78357] [16] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] >>>>> > > [h16n07:78357] [17] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] >>>>> > > [h16n07:78357] [18] >>>>> > >>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] >>>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] >>>>> > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] >>>>> > > [h16n07:78357] [21] >>>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] >>>>> > > [h16n07:78357] *** End of error message *** >>>>> > > ERROR: One or more process (first noticed rank 0) terminated with >>>>> signal >>>>> > 6 >>>>> > /ccs/home/adams/petsc/src/snes/tutorials >>>>> > Possible problem with ex19 running with superlu_dist, diffs above >>>>> > ========================================= >>>>> > >>>>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay >>>>> wrote: >>>>> > >>>>> > > Please send configure.log >>>>> > > >>>>> > > This is what I 
get on my linux build: >>>>> > > >>>>> > > [balay at p1 petsc]$ ./configure >>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 >>>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make check >>>>> > > >>>>> > > Running check examples to verify correct installation >>>>> > > Using PETSC_DIR=/home/balay/petsc and PETSC_ARCH=arch-linux-c-debug >>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>>>> process >>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>>>> processes >>>>> > > 1a2,19 >>>>> > > > CUDA version: v 10020 >>>>> > > > CUDA Devices: >>>>> > > > >>>>> > > > 0 : Quadro T2000 7 5 >>>>> > > > Global memory: 3911 mb >>>>> > > > Shared memory: 48 kb >>>>> > > > Constant memory: 64 kb >>>>> > > > Block registers: 65536 >>>>> > > > >>>>> > > > CUDA version: v 10020 >>>>> > > > CUDA Devices: >>>>> > > > >>>>> > > > 0 : Quadro T2000 7 5 >>>>> > > > Global memory: 3911 mb >>>>> > > > Shared memory: 48 kb >>>>> > > > Constant memory: 64 kb >>>>> > > > Block registers: 65536 >>>>> > > > >>>>> > > /home/balay/petsc/src/snes/tutorials >>>>> > > Possible problem with ex19 running with superlu_dist, diffs above >>>>> > > ========================================= >>>>> > > Fortran example src/snes/tutorials/ex5f run successfully with 1 >>>>> MPI process >>>>> > > Completed test examples >>>>> > > >>>>> > > >>>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: >>>>> > > >>>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay >>>>> wrote: >>>>> > > > >>>>> > > > > The build should work. It should give some verbose info [at >>>>> runtime] >>>>> > > > > regarding GPUs - from the following code. >>>>> > > > > >>>>> > > > > >>>>> > > > I don't see that and I am running GPUs in my code and have gotten >>>>> > > cusparse >>>>> > > > LU to run. Should I use '-info :sys:' ? >>>>> > > > >>>>> > > > >>>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> >>>>> > > > > void DisplayHeader() >>>>> > > > > { >>>>> > > > > const int kb = 1024; >>>>> > > > > const int mb = kb * kb; >>>>> > > > > // cout << "NBody.GPU" << endl << "=========" << endl << >>>>> endl; >>>>> > > > > >>>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); >>>>> > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << >>>>> "." << >>>>> > > > > THRUST_MINOR_VERSION << endl << endl; >>>>> > > > > >>>>> > > > > int devCount; >>>>> > > > > cudaGetDeviceCount(&devCount); >>>>> > > > > printf( "CUDA Devices: \n \n"); >>>>> > > > > >>>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< >>>>> > > > > >>>>> > > > > Satish >>>>> > > > > >>>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: >>>>> > > > > >>>>> > > > > > I remember Barry said superlu gpu support is broken. >>>>> > > > > > --Junchao Zhang >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams >>>>> wrote: >>>>> > > > > > >>>>> > > > > > > How does one use SuperLU with GPUs. I don't seem to get >>>>> any GPU >>>>> > > > > > > performance data so I assume GPUs are not getting turned >>>>> on. Am I >>>>> > > wrong >>>>> > > > > > > about that? 
>>>>> > > > > > > >>>>> > > > > > > I configure with: >>>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC >>>>> -fopenmp" >>>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 >>>>> -fPIC >>>>> > > > > -fopenmp" >>>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 >>>>> > > --with-cxx=mpicxx >>>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc >>>>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 >>>>> > > --download-metis >>>>> > > > > > > --download-superlu --download-superlu_dist >>>>> --with-make-np=16 >>>>> > > > > > > --download-parmetis --download-triangle >>>>> > > > > > > >>>>> > > > > >>>>> > > >>>>> --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 >>>>> > > > > > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 >>>>> > > --with-x=0 >>>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 >>>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 >>>>> > > > > > > --with-threadsaftey=1 --with-log=1 >>>>> > > > > > > >>>>> > > > > > > Thanks, >>>>> > > > > > > Mark >>>>> > > > > > > >>>>> > > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > >>>>> > > >>>>> > > >>>>> > >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun Apr 19 07:16:51 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 19 Apr 2020 08:16:51 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: On Sat, Apr 18, 2020 at 9:04 PM Xiaoye S. Li wrote: > That works, but your previous email showed the following: > Ah, so PETSc must switch internally. Is there any reason why we should not use superlu_dist all of the time? > > SuperLU: > Version: 5.2.1 > Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > Library: -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > which is serial superlu, not superlu_dist. These are 2 different codes. > > Sherry > > On Sat, Apr 18, 2020 at 4:54 PM Mark Adams wrote: > >> >> >> On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li wrote: >> >>> Mark, >>> >>> It seems you are talking about serial superlu? There is no GPU support >>> in it. Only superlu_dist has GPU. >>> >> >> I am using superlu_dist on one processor. Should that work? >> >> >>> >>> But I don't know why there is a crash. >>> >>> Sherry >>> >>> On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote: >>> >>>> Sherry, I did rebase with master this week: >>>> >>>> SuperLU: >>>> Version: 5.2.1 >>>> Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include >>>> Library: >>>> -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib >>>> -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu >>>> >>>> I see the same thing with a debug build. >>>> >>>> If anyone is interested in looking at this, I was also able to see that >>>> plex/ex10 in my branch, which is a very simple test , also does not crash >>>> and also does not seem to use GPUs in SuperLU. >>>> >>>> >>>> On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: >>>> >>>>> When you install "-download-superlu_dist", that is from 'master' >>>>> branch? 
>>>>> >>>>> In the error trace, I recognized this: >>>>> >>>>> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- >>>>> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ >>>>> LU+0xc4)[0x20000195aff4] >>>>> >>>>> This is to free the L and U data structures at the end of the program. >>>>> >>>>> Sherry >>>>> >>>>> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote: >>>>> >>>>>> Back to SuperLU + GPUs (adding Sherry) >>>>>> >>>>>> I get this error (appended) running 'check', as I said before. It >>>>>> looks like ex19 is *failing* with CUDA but it is not clear it has >>>>>> anything to do with SuperLU. I can not find these diagnostics that got >>>>>> printed after the error in PETSc or SuperLU. >>>>>> >>>>>> So this is a problem, but moving on to my code (plex/ex11 in >>>>>> mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. >>>>>> I use superlu and GPUs, but they do not seem to be used in SuperLU: >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------------------------------------------------------ >>>>>> Event Count Time (sec) Flop >>>>>> --- Global --- --- Stage ---- Total GPU - CpuToGpu - - >>>>>> GpuToCpu - GPU >>>>>> Max Ratio Max Ratio Max Ratio Mess >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size >>>>>> Count Size %F >>>>>> >>>>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------- >>>>>> .... >>>>>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 >>>>>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 >>>>>> 0.00e+00 0 0.00e+00 0* >>>>>> >>>>>> No CUDA version. The times are the same and no GPU >>>>>> communication above. So SuperLU does not seem to be using GPUs. >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------------------------------------------------------ >>>>>> Event Count Time (sec) Flop >>>>>> --- Global --- --- Stage ---- Total >>>>>> Max Ratio Max Ratio Max Ratio Mess >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >>>>>> >>>>>> ------------------------------------------------------------------------------------------------------------------------ >>>>>> .... >>>>>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 >>>>>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 >>>>>> >>>>>> There are some differences: ex19 use DMDA and I use DMPlex, 'check' >>>>>> is run in my home directory, where files can not be written, and I run my >>>>>> code in the project areas. >>>>>> >>>>>> The timings are different without superlu so I think superlu is being >>>>>> used. 
THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type >>>>>> sell) >>>>>> >>>>>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view >>>>>> hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 >>>>>> -petscspace_degree 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 >>>>>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 >>>>>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor >>>>>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason >>>>>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover >>>>>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 >>>>>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 >>>>>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 >>>>>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly >>>>>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 >>>>>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 >>>>>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 >>>>>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: >>>>>> -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type >>>>>> superlu -mat_superlu_equil -dm_mat_type sell >>>>>> >>>>>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned >>>>>> on in SuperLU. >>>>>> Thoughts? >>>>>> >>>>>> Thanks, >>>>>> Mark >>>>>> >>>>>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make >>>>>> PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >>>>>> check >>>>>> Running check examples to verify correct installation >>>>>> Using PETSC_DIR=/ccs/home/adams/petsc and >>>>>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>>>>> process >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>>>>> processes >>>>>> 2c2,39 >>>>>> < Number of SNES iterations = 2 >>>>>> --- >>>>>> >>>>>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): >>>>>> Assertion `cacheNode != __null' failed.*> [h50n09:102287] *** >>>>>> Process received signal *** >>>>>> > CUDA version: v 10010 >>>>>> > CUDA Devices: >>>>>> > >>>>>> > 0 : Tesla V100-SXM2-16GB 7 0 >>>>>> > Global memory: 16128 mb >>>>>> > Shared memory: 48 kb >>>>>> > Constant memory: 64 kb >>>>>> > Block registers: 65536 >>>>>> > >>>>>> > [h50n09:102287] Signal: Aborted (6) >>>>>> > [h50n09:102287] Associated errno: Unknown error 1072693248 >>>>>> (1072693248) >>>>>> > [h50n09:102287] Signal code: User function (kill, sigsend, abort, >>>>>> etc.) 
(0) >>>>>> > [h50n09:102287] [ 0] [0x2000000504d8] >>>>>> > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] >>>>>> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] >>>>>> > [h50n09:102287] [ 3] >>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] >>>>>> > [h50n09:102287] [ 4] >>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >>>>>> > [h50n09:102287] [ 5] >>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >>>>>> > [h50n09:102287] [ 6] >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] >>>>>> > [h50n09:102287] [ 7] >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] >>>>>> > [h50n09:102287] [ 8] >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] >>>>>> > [h50n09:102287] [ 9] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] >>>>>> > [h50n09:102287] [10] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] >>>>>> > [h50n09:102287] [11] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] >>>>>> > [h50n09:102287] [12] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] >>>>>> > [h50n09:102287] [13] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] >>>>>> > [h50n09:102287] [14] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] >>>>>> > [h50n09:102287] [15] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] >>>>>> > [h50n09:102287] [16] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] >>>>>> > [h50n09:102287] [17] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] >>>>>> > [h50n09:102287] [18] >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] >>>>>> > [h50n09:102287] [19] ./ex19[0x10001a6c] >>>>>> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] >>>>>> > [h50n09:102287] [21] >>>>>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] >>>>>> > [h50n09:102287] *** End of error message *** >>>>>> > ERROR: One or more process (first noticed rank 0) terminated with >>>>>> signal 6 >>>>>> /ccs/home/adams/petsc/src/snes/tutorials >>>>>> Possible problem with ex19 running with superlu_dist, diffs above >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> #!/usr/bin/env python >>>>>> if __name__ == '__main__': >>>>>> import sys >>>>>> import os >>>>>> sys.path.insert(0, os.path.abspath('config')) >>>>>> import configure >>>>>> configure_options = [ >>>>>> '--with-fc=0', >>>>>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', >>>>>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', >>>>>> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', >>>>>> '--CUDAOPTFLAGS=-O2 -g', >>>>>> '--with-ssl=0', >>>>>> '--with-batch=0', >>>>>> 
'--with-cxx=mpicxx', >>>>>> '--with-mpiexec=jsrun -g1', >>>>>> '--with-cuda=1', >>>>>> '--with-cudac=nvcc', >>>>>> '--download-p4est=1', >>>>>> '--download-zlib', >>>>>> '--download-hdf5=1', >>>>>> '--download-metis', >>>>>> '--download-superlu', >>>>>> '--download-superlu_dist', >>>>>> '--with-make-np=16', >>>>>> # '--with-hwloc=0', >>>>>> '--download-parmetis', >>>>>> # '--download-hypre', >>>>>> '--download-triangle', >>>>>> # '--download-amgx', >>>>>> # '--download-fblaslapack', >>>>>> '--with-blaslapack-lib=-L' + >>>>>> os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack', >>>>>> '--with-cc=mpicc', >>>>>> # '--with-fc=mpif90', >>>>>> '--with-shared-libraries=1', >>>>>> # '--known-mpi-shared-libraries=1', >>>>>> '--with-x=0', >>>>>> '--with-64-bit-indices=0', >>>>>> '--with-debugging=0', >>>>>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', >>>>>> '--with-openmp=1', >>>>>> '--with-threadsaftey=1', >>>>>> '--with-log=1' >>>>>> ] >>>>>> configure.petsc_configure(configure_options) >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay >>>>>> wrote: >>>>>> >>>>>>> The crash is inside Superlu_DIST - so don't know what to suggest. >>>>>>> >>>>>>> Might have to debug this via debugger and check with Sherry. >>>>>>> >>>>>>> Satish >>>>>>> >>>>>>> On Wed, 15 Apr 2020, Mark Adams wrote: >>>>>>> >>>>>>> > Ah, OK 'check' will test SuperLU. Semi worked: >>>>>>> > >>>>>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make >>>>>>> > PETSC_DIR=/ccs/home/adams/petsc >>>>>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>>>>>> > check >>>>>>> > Running check examples to verify correct installation >>>>>>> > Using PETSC_DIR=/ccs/home/adams/petsc and >>>>>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >>>>>>> process >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >>>>>>> processes >>>>>>> > 2c2,38 >>>>>>> > < Number of SNES iterations = 2 >>>>>>> > --- >>>>>>> > > CUDA version: v 10010 >>>>>>> > > CUDA Devices: >>>>>>> > > >>>>>>> > > 0 : Tesla V100-SXM2-16GB 7 0 >>>>>>> > > Global memory: 16128 mb >>>>>>> > > Shared memory: 48 kb >>>>>>> > > Constant memory: 64 kb >>>>>>> > > Block registers: 65536 >>>>>>> > > >>>>>>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): >>>>>>> Assertion >>>>>>> > `cacheNode != __null' failed. 
>>>>>>> > > [h16n07:78357] *** Process received signal *** >>>>>>> > > [h16n07:78357] Signal: Aborted (6) >>>>>>> > > [h16n07:78357] Signal code: (1704218624) >>>>>>> > > [h16n07:78357] [ 0] [0x2000000504d8] >>>>>>> > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] >>>>>>> > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] >>>>>>> > > [h16n07:78357] [ 3] >>>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] >>>>>>> > > [h16n07:78357] [ 4] >>>>>>> > >>>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >>>>>>> > > [h16n07:78357] [ 5] >>>>>>> > >>>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >>>>>>> > > [h16n07:78357] [ 6] >>>>>>> > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] >>>>>>> > > [h16n07:78357] [ 7] >>>>>>> > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] >>>>>>> > > [h16n07:78357] [ 8] >>>>>>> > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] >>>>>>> > > [h16n07:78357] [ 9] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] >>>>>>> > > [h16n07:78357] [10] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] >>>>>>> > > [h16n07:78357] [11] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] >>>>>>> > > [h16n07:78357] [12] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] >>>>>>> > > [h16n07:78357] [13] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] >>>>>>> > > [h16n07:78357] [14] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] >>>>>>> > > [h16n07:78357] [15] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] >>>>>>> > > [h16n07:78357] [16] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] >>>>>>> > > [h16n07:78357] [17] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] >>>>>>> > > [h16n07:78357] [18] >>>>>>> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] >>>>>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] >>>>>>> > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] >>>>>>> > > [h16n07:78357] [21] >>>>>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] >>>>>>> > > [h16n07:78357] *** End of error message *** >>>>>>> > > ERROR: One or more process (first noticed rank 0) terminated >>>>>>> with signal >>>>>>> > 6 >>>>>>> > /ccs/home/adams/petsc/src/snes/tutorials >>>>>>> > Possible problem with ex19 running with superlu_dist, diffs above >>>>>>> > ========================================= >>>>>>> > >>>>>>> > On Wed, 
Apr 15, 2020 at 5:58 PM Satish Balay >>>>>>> wrote: >>>>>>> > >>>>>>> > > Please send configure.log >>>>>>> > > >>>>>>> > > This is what I get on my linux build: >>>>>>> > > >>>>>>> > > [balay at p1 petsc]$ ./configure >>>>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 >>>>>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make check >>>>>>> > > >>>>>>> > > Running check examples to verify correct installation >>>>>>> > > Using PETSC_DIR=/home/balay/petsc and >>>>>>> PETSC_ARCH=arch-linux-c-debug >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 >>>>>>> MPI process >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 >>>>>>> MPI processes >>>>>>> > > 1a2,19 >>>>>>> > > > CUDA version: v 10020 >>>>>>> > > > CUDA Devices: >>>>>>> > > > >>>>>>> > > > 0 : Quadro T2000 7 5 >>>>>>> > > > Global memory: 3911 mb >>>>>>> > > > Shared memory: 48 kb >>>>>>> > > > Constant memory: 64 kb >>>>>>> > > > Block registers: 65536 >>>>>>> > > > >>>>>>> > > > CUDA version: v 10020 >>>>>>> > > > CUDA Devices: >>>>>>> > > > >>>>>>> > > > 0 : Quadro T2000 7 5 >>>>>>> > > > Global memory: 3911 mb >>>>>>> > > > Shared memory: 48 kb >>>>>>> > > > Constant memory: 64 kb >>>>>>> > > > Block registers: 65536 >>>>>>> > > > >>>>>>> > > /home/balay/petsc/src/snes/tutorials >>>>>>> > > Possible problem with ex19 running with superlu_dist, diffs above >>>>>>> > > ========================================= >>>>>>> > > Fortran example src/snes/tutorials/ex5f run successfully with 1 >>>>>>> MPI process >>>>>>> > > Completed test examples >>>>>>> > > >>>>>>> > > >>>>>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: >>>>>>> > > >>>>>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay < >>>>>>> balay at mcs.anl.gov> wrote: >>>>>>> > > > >>>>>>> > > > > The build should work. It should give some verbose info [at >>>>>>> runtime] >>>>>>> > > > > regarding GPUs - from the following code. >>>>>>> > > > > >>>>>>> > > > > >>>>>>> > > > I don't see that and I am running GPUs in my code and have >>>>>>> gotten >>>>>>> > > cusparse >>>>>>> > > > LU to run. Should I use '-info :sys:' ? >>>>>>> > > > >>>>>>> > > > >>>>>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> >>>>>>> > > > > void DisplayHeader() >>>>>>> > > > > { >>>>>>> > > > > const int kb = 1024; >>>>>>> > > > > const int mb = kb * kb; >>>>>>> > > > > // cout << "NBody.GPU" << endl << "=========" << endl << >>>>>>> endl; >>>>>>> > > > > >>>>>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); >>>>>>> > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << >>>>>>> "." << >>>>>>> > > > > THRUST_MINOR_VERSION << endl << endl; >>>>>>> > > > > >>>>>>> > > > > int devCount; >>>>>>> > > > > cudaGetDeviceCount(&devCount); >>>>>>> > > > > printf( "CUDA Devices: \n \n"); >>>>>>> > > > > >>>>>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< >>>>>>> > > > > >>>>>>> > > > > Satish >>>>>>> > > > > >>>>>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: >>>>>>> > > > > >>>>>>> > > > > > I remember Barry said superlu gpu support is broken. >>>>>>> > > > > > --Junchao Zhang >>>>>>> > > > > > >>>>>>> > > > > > >>>>>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams < >>>>>>> mfadams at lbl.gov> wrote: >>>>>>> > > > > > >>>>>>> > > > > > > How does one use SuperLU with GPUs. I don't seem to get >>>>>>> any GPU >>>>>>> > > > > > > performance data so I assume GPUs are not getting turned >>>>>>> on. Am I >>>>>>> > > wrong >>>>>>> > > > > > > about that? 
>>>>>>> > > > > > > >>>>>>> > > > > > > I configure with: >>>>>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC >>>>>>> -fopenmp" >>>>>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g >>>>>>> -O2 -fPIC >>>>>>> > > > > -fopenmp" >>>>>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 >>>>>>> > > --with-cxx=mpicxx >>>>>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 >>>>>>> --with-cudac=nvcc >>>>>>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 >>>>>>> > > --download-metis >>>>>>> > > > > > > --download-superlu --download-superlu_dist >>>>>>> --with-make-np=16 >>>>>>> > > > > > > --download-parmetis --download-triangle >>>>>>> > > > > > > >>>>>>> > > > > >>>>>>> > > >>>>>>> --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 >>>>>>> > > > > > > -lblas -llapack" --with-cc=mpicc >>>>>>> --with-shared-libraries=1 >>>>>>> > > --with-x=0 >>>>>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 >>>>>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 >>>>>>> > > > > > > --with-threadsaftey=1 --with-log=1 >>>>>>> > > > > > > >>>>>>> > > > > > > Thanks, >>>>>>> > > > > > > Mark >>>>>>> > > > > > > >>>>>>> > > > > > >>>>>>> > > > > >>>>>>> > > > > >>>>>>> > > > >>>>>>> > > >>>>>>> > > >>>>>>> > >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Sun Apr 19 07:25:22 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sun, 19 Apr 2020 09:25:22 -0300 Subject: [petsc-users] Ignoring PETSC_ARCH for make check? In-Reply-To: <874ktgcynm.fsf@jedbrown.org> References: <874ktgcynm.fsf@jedbrown.org> Message-ID: Ok, the, the second option applies... I had forgotten about this observation from the multiple times I installed PETSc in the past. Then, two questions come to mind: 1. Why is it set up like that? 2. What is the difference in behaviour? I see the same output from both options. Thanks again! On Sat, Apr 18, 2020 at 5:43 PM Jed Brown wrote: > It's intentional and been like this for ages. Prefix installs have only > PETSC_DIR (just a path, like other packages), and *must not* set > PETSC_ARCH. > > san.temporal at gmail.com writes: > > > Hi all, > > > > I have just successfully compiled 3.13.0. But with install this is what I > > get > > > > $ make > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > PETSC_ARCH=arch-linux2-c-opt install > > *** Using > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > PETSC_ARCH=arch-linux2-c-opt *** > > *** Installing PETSc at prefix location: /home/santiago/usr/local > *** > > ==================================== > > Install complete. > > Now to check if the libraries are working do (in current directory): > > make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" check > > ==================================== > > /usr/bin/make --no-print-directory -f makefile > > PETSC_ARCH=arch-linux2-c-opt > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > mpi4py-install petsc4py-install libmesh-install mfem-install > slepc-install > > hpddm-install amrex-install > > make[2]: Nothing to be done for 'mpi4py-install'. > > make[2]: Nothing to be done for 'petsc4py-install'. > > make[2]: Nothing to be done for 'libmesh-install'. > > make[2]: Nothing to be done for 'mfem-install'. 
> > make[2]: Nothing to be done for 'slepc-install'. > > make[2]: Nothing to be done for 'hpddm-install'. > > make[2]: Nothing to be done for 'amrex-install'. > > > > What is strange to me is that I am instructed to execute a line with > > PETSC_ARCH=", while my environment has PETSC_ARCH=arch-linux2-c-opt > > Why is that? > > > > PS: The same happened to me with various other compilations I have just > > tested, with 3.9, 3.10, 3.11, 3.12 > > > > PS2: I do not recall seeing this ever before, although I may have missed > > it/forgotten. > > > > Thanks in advance, > > Santiago > -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Sun Apr 19 07:49:06 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sun, 19 Apr 2020 09:49:06 -0300 Subject: [petsc-users] Mumps giving huge volume of output Message-ID: I have just successfully compiled PETSc with ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 --with-shared-libraries --with-packages-download-dir=/home/santiago/Documents/installers/petsc --download-fblaslapack --download-mumps --download-scalapack --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 -march=native -mtune=native' When I run my program, I get very large amounts of information from mumps. For example, the first such block starts with Entering DMUMPS 5.2.1 from C interface with JOB, N = 1 15 executing #MPI = 8, without OMP ================================================= This MUMPS version includes code for SAVE_RESTORE ================================================= L U Solver for unsymmetric matrices Type of parallelism: Working host ****** ANALYSIS STEP ******** ** Max-trans not allowed because matrix is distributed Entering analysis phase with ... Is there a way to control (and eliminate) that output? I guessed with --with-debugging=0 would do the trick, but that was not the case. Thanks, Santiago -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sun Apr 19 10:15:26 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 19 Apr 2020 10:15:26 -0500 (CDT) Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: On Sun, 19 Apr 2020, Mark Adams wrote: > On Sat, Apr 18, 2020 at 9:04 PM Xiaoye S. Li wrote: > > > That works, but your previous email showed the following: > > > > Ah, so PETSc must switch internally. I don't think so > > Is there any reason why we should not use superlu_dist all of the time? > > --download-superlu --download-superlu_dist You are installing with both superlu and superlu_dist. To verify - remove superlu - and keep only superlu_dist Satish > > > > > > SuperLU: > > Version: 5.2.1 > > Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > > Library: -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > > > which is serial superlu, not superlu_dist. These are 2 different codes. > > > > Sherry > > > > On Sat, Apr 18, 2020 at 4:54 PM Mark Adams wrote: > > > >> > >> > >> On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li wrote: > >> > >>> Mark, > >>> > >>> It seems you are talking about serial superlu? There is no GPU support > >>> in it. Only superlu_dist has GPU. > >>> > >> > >> I am using superlu_dist on one processor. Should that work? 
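Following the suggestion above to keep only superlu_dist, a minimal sketch (helper name made up for illustration) of requesting SuperLU_DIST explicitly so the serial superlu package cannot be picked up by accident. It assumes an existing, already-configured KSP named ksp; the command-line equivalent is -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type superlu_dist, together with a configure that omits --download-superlu:

#include <petscksp.h>

/* Hedged sketch: select SuperLU_DIST for the LU factorization. */
PetscErrorCode UseSuperLUDist(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERSUPERLU_DIST);CHKERRQ(ierr);
  return 0;
}

With only superlu_dist installed, a stray -pc_factor_mat_solver_type superlu would then fail with a missing-package error instead of silently running the serial library.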
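On the separate MUMPS question earlier in this digest (the large volume of output): that banner is governed by MUMPS's ICNTL print settings, not by --with-debugging. A minimal sketch, assuming PETSc's MUMPS interface and an existing KSP named ksp on which KSPSetOperators() has already been called (the helper name is made up); the run-time option -mat_mumps_icntl_4 0 achieves the same with no code change:

#include <petscksp.h>

/* Hedged sketch: lower MUMPS's print level so the
   "Entering DMUMPS ... ANALYSIS STEP" banner is suppressed. */
PetscErrorCode QuietMumps(KSP ksp)
{
  PC             pc;
  Mat            F;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
  ierr = PCFactorSetUpMatSolverType(pc);CHKERRQ(ierr); /* creates the factor matrix F */
  ierr = PCFactorGetMatrix(pc, &F);CHKERRQ(ierr);
  ierr = MatMumpsSetIcntl(F, 4, 0);CHKERRQ(ierr);      /* ICNTL(4)=0: no MUMPS messages */
  return 0;
}

ICNTL(1)-(3) select the MUMPS output streams themselves if finer control is wanted; see the MUMPS users' guide for the exact meaning of each print level.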
> >> > >> > >>> > >>> But I don't know why there is a crash. > >>> > >>> Sherry > >>> > >>> On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote: > >>> > >>>> Sherry, I did rebase with master this week: > >>>> > >>>> SuperLU: > >>>> Version: 5.2.1 > >>>> Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > >>>> Library: > >>>> -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > >>>> -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > >>>> > >>>> I see the same thing with a debug build. > >>>> > >>>> If anyone is interested in looking at this, I was also able to see that > >>>> plex/ex10 in my branch, which is a very simple test , also does not crash > >>>> and also does not seem to use GPUs in SuperLU. > >>>> > >>>> > >>>> On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: > >>>> > >>>>> When you install "-download-superlu_dist", that is from 'master' > >>>>> branch? > >>>>> > >>>>> In the error trace, I recognized this: > >>>>> > >>>>> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- > >>>>> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ > >>>>> LU+0xc4)[0x20000195aff4] > >>>>> > >>>>> This is to free the L and U data structures at the end of the program. > >>>>> > >>>>> Sherry > >>>>> > >>>>> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote: > >>>>> > >>>>>> Back to SuperLU + GPUs (adding Sherry) > >>>>>> > >>>>>> I get this error (appended) running 'check', as I said before. It > >>>>>> looks like ex19 is *failing* with CUDA but it is not clear it has > >>>>>> anything to do with SuperLU. I can not find these diagnostics that got > >>>>>> printed after the error in PETSc or SuperLU. > >>>>>> > >>>>>> So this is a problem, but moving on to my code (plex/ex11 in > >>>>>> mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. > >>>>>> I use superlu and GPUs, but they do not seem to be used in SuperLU: > >>>>>> > >>>>>> > >>>>>> ------------------------------------------------------------------------------------------------------------------------ > >>>>>> Event Count Time (sec) Flop > >>>>>> --- Global --- --- Stage ---- Total GPU - CpuToGpu - - > >>>>>> GpuToCpu - GPU > >>>>>> Max Ratio Max Ratio Max Ratio Mess > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size > >>>>>> Count Size %F > >>>>>> > >>>>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------- > >>>>>> .... > >>>>>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 > >>>>>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 > >>>>>> 0.00e+00 0 0.00e+00 0* > >>>>>> > >>>>>> No CUDA version. The times are the same and no GPU > >>>>>> communication above. So SuperLU does not seem to be using GPUs. > >>>>>> > >>>>>> > >>>>>> ------------------------------------------------------------------------------------------------------------------------ > >>>>>> Event Count Time (sec) Flop > >>>>>> --- Global --- --- Stage ---- Total > >>>>>> Max Ratio Max Ratio Max Ratio Mess > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > >>>>>> > >>>>>> ------------------------------------------------------------------------------------------------------------------------ > >>>>>> .... 
> >>>>>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 > >>>>>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 > >>>>>> > >>>>>> There are some differences: ex19 use DMDA and I use DMPlex, 'check' > >>>>>> is run in my home directory, where files can not be written, and I run my > >>>>>> code in the project areas. > >>>>>> > >>>>>> The timings are different without superlu so I think superlu is being > >>>>>> used. THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type > >>>>>> sell) > >>>>>> > >>>>>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view > >>>>>> hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 > >>>>>> -petscspace_degree 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 > >>>>>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 > >>>>>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor > >>>>>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason > >>>>>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover > >>>>>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 > >>>>>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 > >>>>>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 > >>>>>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly > >>>>>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 > >>>>>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 > >>>>>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 > >>>>>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: > >>>>>> -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type > >>>>>> superlu -mat_superlu_equil -dm_mat_type sell > >>>>>> > >>>>>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned > >>>>>> on in SuperLU. > >>>>>> Thoughts? > >>>>>> > >>>>>> Thanks, > >>>>>> Mark > >>>>>> > >>>>>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make > >>>>>> PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > >>>>>> check > >>>>>> Running check examples to verify correct installation > >>>>>> Using PETSC_DIR=/ccs/home/adams/petsc and > >>>>>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI > >>>>>> process > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI > >>>>>> processes > >>>>>> 2c2,39 > >>>>>> < Number of SNES iterations = 2 > >>>>>> --- > >>>>>> > >>>>>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): > >>>>>> Assertion `cacheNode != __null' failed.*> [h50n09:102287] *** > >>>>>> Process received signal *** > >>>>>> > CUDA version: v 10010 > >>>>>> > CUDA Devices: > >>>>>> > > >>>>>> > 0 : Tesla V100-SXM2-16GB 7 0 > >>>>>> > Global memory: 16128 mb > >>>>>> > Shared memory: 48 kb > >>>>>> > Constant memory: 64 kb > >>>>>> > Block registers: 65536 > >>>>>> > > >>>>>> > [h50n09:102287] Signal: Aborted (6) > >>>>>> > [h50n09:102287] Associated errno: Unknown error 1072693248 > >>>>>> (1072693248) > >>>>>> > [h50n09:102287] Signal code: User function (kill, sigsend, abort, > >>>>>> etc.) 
(0) > >>>>>> > [h50n09:102287] [ 0] [0x2000000504d8] > >>>>>> > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] > >>>>>> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] > >>>>>> > [h50n09:102287] [ 3] > >>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] > >>>>>> > [h50n09:102287] [ 4] > >>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > >>>>>> > [h50n09:102287] [ 5] > >>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > >>>>>> > [h50n09:102287] [ 6] > >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] > >>>>>> > [h50n09:102287] [ 7] > >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] > >>>>>> > [h50n09:102287] [ 8] > >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] > >>>>>> > [h50n09:102287] [ 9] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] > >>>>>> > [h50n09:102287] [10] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] > >>>>>> > [h50n09:102287] [11] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] > >>>>>> > [h50n09:102287] [12] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] > >>>>>> > [h50n09:102287] [13] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] > >>>>>> > [h50n09:102287] [14] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] > >>>>>> > [h50n09:102287] [15] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] > >>>>>> > [h50n09:102287] [16] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] > >>>>>> > [h50n09:102287] [17] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] > >>>>>> > [h50n09:102287] [18] > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] > >>>>>> > [h50n09:102287] [19] ./ex19[0x10001a6c] > >>>>>> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] > >>>>>> > [h50n09:102287] [21] > >>>>>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] > >>>>>> > [h50n09:102287] *** End of error message *** > >>>>>> > ERROR: One or more process (first noticed rank 0) terminated with > >>>>>> signal 6 > >>>>>> /ccs/home/adams/petsc/src/snes/tutorials > >>>>>> Possible problem with ex19 running with superlu_dist, diffs above > >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> #!/usr/bin/env python > >>>>>> if __name__ == '__main__': > >>>>>> import sys > >>>>>> import os > >>>>>> sys.path.insert(0, os.path.abspath('config')) > >>>>>> import configure > >>>>>> configure_options = [ > >>>>>> '--with-fc=0', > >>>>>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', > >>>>>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', > >>>>>> 
'--FOPTFLAGS=-g -O2 -fPIC -fopenmp', > >>>>>> '--CUDAOPTFLAGS=-O2 -g', > >>>>>> '--with-ssl=0', > >>>>>> '--with-batch=0', > >>>>>> '--with-cxx=mpicxx', > >>>>>> '--with-mpiexec=jsrun -g1', > >>>>>> '--with-cuda=1', > >>>>>> '--with-cudac=nvcc', > >>>>>> '--download-p4est=1', > >>>>>> '--download-zlib', > >>>>>> '--download-hdf5=1', > >>>>>> '--download-metis', > >>>>>> '--download-superlu', > >>>>>> '--download-superlu_dist', > >>>>>> '--with-make-np=16', > >>>>>> # '--with-hwloc=0', > >>>>>> '--download-parmetis', > >>>>>> # '--download-hypre', > >>>>>> '--download-triangle', > >>>>>> # '--download-amgx', > >>>>>> # '--download-fblaslapack', > >>>>>> '--with-blaslapack-lib=-L' + > >>>>>> os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack', > >>>>>> '--with-cc=mpicc', > >>>>>> # '--with-fc=mpif90', > >>>>>> '--with-shared-libraries=1', > >>>>>> # '--known-mpi-shared-libraries=1', > >>>>>> '--with-x=0', > >>>>>> '--with-64-bit-indices=0', > >>>>>> '--with-debugging=0', > >>>>>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', > >>>>>> '--with-openmp=1', > >>>>>> '--with-threadsaftey=1', > >>>>>> '--with-log=1' > >>>>>> ] > >>>>>> configure.petsc_configure(configure_options) > >>>>>> > >>>>>> > >>>>>> > >>>>>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay > >>>>>> wrote: > >>>>>> > >>>>>>> The crash is inside Superlu_DIST - so don't know what to suggest. > >>>>>>> > >>>>>>> Might have to debug this via debugger and check with Sherry. > >>>>>>> > >>>>>>> Satish > >>>>>>> > >>>>>>> On Wed, 15 Apr 2020, Mark Adams wrote: > >>>>>>> > >>>>>>> > Ah, OK 'check' will test SuperLU. Semi worked: > >>>>>>> > > >>>>>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make > >>>>>>> > PETSC_DIR=/ccs/home/adams/petsc > >>>>>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > >>>>>>> > check > >>>>>>> > Running check examples to verify correct installation > >>>>>>> > Using PETSC_DIR=/ccs/home/adams/petsc and > >>>>>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI > >>>>>>> process > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI > >>>>>>> processes > >>>>>>> > 2c2,38 > >>>>>>> > < Number of SNES iterations = 2 > >>>>>>> > --- > >>>>>>> > > CUDA version: v 10010 > >>>>>>> > > CUDA Devices: > >>>>>>> > > > >>>>>>> > > 0 : Tesla V100-SXM2-16GB 7 0 > >>>>>>> > > Global memory: 16128 mb > >>>>>>> > > Shared memory: 48 kb > >>>>>>> > > Constant memory: 64 kb > >>>>>>> > > Block registers: 65536 > >>>>>>> > > > >>>>>>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): > >>>>>>> Assertion > >>>>>>> > `cacheNode != __null' failed. 
> >>>>>>> > > [h16n07:78357] *** Process received signal *** > >>>>>>> > > [h16n07:78357] Signal: Aborted (6) > >>>>>>> > > [h16n07:78357] Signal code: (1704218624) > >>>>>>> > > [h16n07:78357] [ 0] [0x2000000504d8] > >>>>>>> > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] > >>>>>>> > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] > >>>>>>> > > [h16n07:78357] [ 3] > >>>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] > >>>>>>> > > [h16n07:78357] [ 4] > >>>>>>> > > >>>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > >>>>>>> > > [h16n07:78357] [ 5] > >>>>>>> > > >>>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > >>>>>>> > > [h16n07:78357] [ 6] > >>>>>>> > > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] > >>>>>>> > > [h16n07:78357] [ 7] > >>>>>>> > > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] > >>>>>>> > > [h16n07:78357] [ 8] > >>>>>>> > > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] > >>>>>>> > > [h16n07:78357] [ 9] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] > >>>>>>> > > [h16n07:78357] [10] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] > >>>>>>> > > [h16n07:78357] [11] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] > >>>>>>> > > [h16n07:78357] [12] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] > >>>>>>> > > [h16n07:78357] [13] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] > >>>>>>> > > [h16n07:78357] [14] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] > >>>>>>> > > [h16n07:78357] [15] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] > >>>>>>> > > [h16n07:78357] [16] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] > >>>>>>> > > [h16n07:78357] [17] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] > >>>>>>> > > [h16n07:78357] [18] > >>>>>>> > > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] > >>>>>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] > >>>>>>> > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] > >>>>>>> > > [h16n07:78357] [21] > >>>>>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] > >>>>>>> > > [h16n07:78357] *** End of error message *** > >>>>>>> > > ERROR: One or more process (first noticed rank 0) terminated > >>>>>>> with signal > >>>>>>> > 6 > >>>>>>> > /ccs/home/adams/petsc/src/snes/tutorials > >>>>>>> > Possible problem 
with ex19 running with superlu_dist, diffs above > >>>>>>> > ========================================= > >>>>>>> > > >>>>>>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay > >>>>>>> wrote: > >>>>>>> > > >>>>>>> > > Please send configure.log > >>>>>>> > > > >>>>>>> > > This is what I get on my linux build: > >>>>>>> > > > >>>>>>> > > [balay at p1 petsc]$ ./configure > >>>>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 > >>>>>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make check > >>>>>>> > > > >>>>>>> > > Running check examples to verify correct installation > >>>>>>> > > Using PETSC_DIR=/home/balay/petsc and > >>>>>>> PETSC_ARCH=arch-linux-c-debug > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 > >>>>>>> MPI process > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 > >>>>>>> MPI processes > >>>>>>> > > 1a2,19 > >>>>>>> > > > CUDA version: v 10020 > >>>>>>> > > > CUDA Devices: > >>>>>>> > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > >>>>>>> > > > Global memory: 3911 mb > >>>>>>> > > > Shared memory: 48 kb > >>>>>>> > > > Constant memory: 64 kb > >>>>>>> > > > Block registers: 65536 > >>>>>>> > > > > >>>>>>> > > > CUDA version: v 10020 > >>>>>>> > > > CUDA Devices: > >>>>>>> > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > >>>>>>> > > > Global memory: 3911 mb > >>>>>>> > > > Shared memory: 48 kb > >>>>>>> > > > Constant memory: 64 kb > >>>>>>> > > > Block registers: 65536 > >>>>>>> > > > > >>>>>>> > > /home/balay/petsc/src/snes/tutorials > >>>>>>> > > Possible problem with ex19 running with superlu_dist, diffs above > >>>>>>> > > ========================================= > >>>>>>> > > Fortran example src/snes/tutorials/ex5f run successfully with 1 > >>>>>>> MPI process > >>>>>>> > > Completed test examples > >>>>>>> > > > >>>>>>> > > > >>>>>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: > >>>>>>> > > > >>>>>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay < > >>>>>>> balay at mcs.anl.gov> wrote: > >>>>>>> > > > > >>>>>>> > > > > The build should work. It should give some verbose info [at > >>>>>>> runtime] > >>>>>>> > > > > regarding GPUs - from the following code. > >>>>>>> > > > > > >>>>>>> > > > > > >>>>>>> > > > I don't see that and I am running GPUs in my code and have > >>>>>>> gotten > >>>>>>> > > cusparse > >>>>>>> > > > LU to run. Should I use '-info :sys:' ? > >>>>>>> > > > > >>>>>>> > > > > >>>>>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > >>>>>>> > > > > void DisplayHeader() > >>>>>>> > > > > { > >>>>>>> > > > > const int kb = 1024; > >>>>>>> > > > > const int mb = kb * kb; > >>>>>>> > > > > // cout << "NBody.GPU" << endl << "=========" << endl << > >>>>>>> endl; > >>>>>>> > > > > > >>>>>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > >>>>>>> > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << > >>>>>>> "." << > >>>>>>> > > > > THRUST_MINOR_VERSION << endl << endl; > >>>>>>> > > > > > >>>>>>> > > > > int devCount; > >>>>>>> > > > > cudaGetDeviceCount(&devCount); > >>>>>>> > > > > printf( "CUDA Devices: \n \n"); > >>>>>>> > > > > > >>>>>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > >>>>>>> > > > > > >>>>>>> > > > > Satish > >>>>>>> > > > > > >>>>>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > >>>>>>> > > > > > >>>>>>> > > > > > I remember Barry said superlu gpu support is broken. 
> >>>>>>> > > > > > --Junchao Zhang > >>>>>>> > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams < > >>>>>>> mfadams at lbl.gov> wrote: > >>>>>>> > > > > > > >>>>>>> > > > > > > How does one use SuperLU with GPUs. I don't seem to get > >>>>>>> any GPU > >>>>>>> > > > > > > performance data so I assume GPUs are not getting turned > >>>>>>> on. Am I > >>>>>>> > > wrong > >>>>>>> > > > > > > about that? > >>>>>>> > > > > > > > >>>>>>> > > > > > > I configure with: > >>>>>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC > >>>>>>> -fopenmp" > >>>>>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g > >>>>>>> -O2 -fPIC > >>>>>>> > > > > -fopenmp" > >>>>>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > >>>>>>> > > --with-cxx=mpicxx > >>>>>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 > >>>>>>> --with-cudac=nvcc > >>>>>>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 > >>>>>>> > > --download-metis > >>>>>>> > > > > > > --download-superlu --download-superlu_dist > >>>>>>> --with-make-np=16 > >>>>>>> > > > > > > --download-parmetis --download-triangle > >>>>>>> > > > > > > > >>>>>>> > > > > > >>>>>>> > > > >>>>>>> --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > >>>>>>> > > > > > > -lblas -llapack" --with-cc=mpicc > >>>>>>> --with-shared-libraries=1 > >>>>>>> > > --with-x=0 > >>>>>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 > >>>>>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 > >>>>>>> > > > > > > --with-threadsaftey=1 --with-log=1 > >>>>>>> > > > > > > > >>>>>>> > > > > > > Thanks, > >>>>>>> > > > > > > Mark > >>>>>>> > > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > > >>>>>>> > > > > > >>>>>>> > > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> > > >>>>>>> > >>>>>>> > From balay at mcs.anl.gov Sun Apr 19 10:22:50 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 19 Apr 2020 10:22:50 -0500 (CDT) Subject: [petsc-users] Ignoring PETSC_ARCH for make check? In-Reply-To: References: <874ktgcynm.fsf@jedbrown.org> Message-ID: PETSc supports both inplace multiple builds - and prefix builds. Most packages don't support inplace multiple builds. For inplace multiple builds - you need the PETSC_ARCH concept. But not for prefix builds. i.e: 2 inplace builds. ./configure PETSC_ARCH=arch-build1 --with-cc=gcc etc.. make ./configure PETSC_ARCH=arch-build2 --with-cc=icc etc.. make 2 prefix builds ./configure --prefix=$HOME/soft/petsc-install-1 PETSC_ARCH=arch-build1 --with-cc=gcc make make install ./configure --prefix=$HOME/soft/petsc-install-1 PETSC_ARCH=arch-build2 --with-cc=icc make make install Note: using a different PETSC_ARCH above so that the intermediate build files from the first build don't conflict with those from the second build. Note: due to same files used in both cases - i.e prefix and inplace [i.e petsc sources from build location] for make check - a wrong value of 'PETSC_ARCH' can break 'make check' The files installed in prefix location don't care about PETSC_ARCH value. Satish On Sun, 19 Apr 2020, san.temporal at gmail.com wrote: > Ok, the, the second option applies... I had forgotten about this > observation from the multiple times I installed PETSc in the past. > > Then, two questions come to mind: > > 1. Why is it set up like that? > 2. What is the difference in behaviour? 
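To make the distinction concrete, a minimal sketch of the matching 'make check' invocations for the two build styles described above (paths are illustrative, and the second prefix in the example was presumably meant to be a distinct directory such as $HOME/soft/petsc-install-2):

  # in-place build: PETSC_DIR is the source tree, PETSC_ARCH names the build
  make PETSC_DIR=$HOME/petsc PETSC_ARCH=arch-build1 check

  # prefix install: PETSC_DIR is the install prefix, PETSC_ARCH must be empty
  make PETSC_DIR=$HOME/soft/petsc-install-1 PETSC_ARCH="" check

The second form is exactly what 'make install' prints at the end of a prefix install, as in the install log quoted further down in this thread.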
I see the same output from both > options. > > Thanks again! > > On Sat, Apr 18, 2020 at 5:43 PM Jed Brown wrote: > > > It's intentional and been like this for ages. Prefix installs have only > > PETSC_DIR (just a path, like other packages), and *must not* set > > PETSC_ARCH. > > > > san.temporal at gmail.com writes: > > > > > Hi all, > > > > > > I have just successfully compiled 3.13.0. But with install this is what I > > > get > > > > > > $ make > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > > PETSC_ARCH=arch-linux2-c-opt install > > > *** Using > > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > > PETSC_ARCH=arch-linux2-c-opt *** > > > *** Installing PETSc at prefix location: /home/santiago/usr/local > > *** > > > ==================================== > > > Install complete. > > > Now to check if the libraries are working do (in current directory): > > > make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" check > > > ==================================== > > > /usr/bin/make --no-print-directory -f makefile > > > PETSC_ARCH=arch-linux2-c-opt > > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > > mpi4py-install petsc4py-install libmesh-install mfem-install > > slepc-install > > > hpddm-install amrex-install > > > make[2]: Nothing to be done for 'mpi4py-install'. > > > make[2]: Nothing to be done for 'petsc4py-install'. > > > make[2]: Nothing to be done for 'libmesh-install'. > > > make[2]: Nothing to be done for 'mfem-install'. > > > make[2]: Nothing to be done for 'slepc-install'. > > > make[2]: Nothing to be done for 'hpddm-install'. > > > make[2]: Nothing to be done for 'amrex-install'. > > > > > > What is strange to me is that I am instructed to execute a line with > > > PETSC_ARCH=", while my environment has PETSC_ARCH=arch-linux2-c-opt > > > Why is that? > > > > > > PS: The same happened to me with various other compilations I have just > > > tested, with 3.9, 3.10, 3.11, 3.12 > > > > > > PS2: I do not recall seeing this ever before, although I may have missed > > > it/forgotten. > > > > > > Thanks in advance, > > > Santiago > > > From balay at mcs.anl.gov Sun Apr 19 10:38:48 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 19 Apr 2020 10:38:48 -0500 (CDT) Subject: [petsc-users] Mumps giving huge volume of output In-Reply-To: References: Message-ID: This is coming from mumps. PETSc configure option --with-debugging=0 does not control it. You might have to check which mumps option controls it - and perhaps use the runtime option -mat_mumps_icntl_(x) Satish On Sun, 19 Apr 2020, san.temporal at gmail.com wrote: > I have just successfully compiled PETSc with > > ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > --prefix=/home/santiago/usr/local --with-make-np=10 > --with-shared-libraries > --with-packages-download-dir=/home/santiago/Documents/installers/petsc > --download-fblaslapack --download-mumps --download-scalapack > --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' > FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 > -march=native -mtune=native' > > When I run my program, I get very large amounts of information from mumps. 
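As a concrete form of the -mat_mumps_icntl_(x) suggestion above: MUMPS printing is controlled by its ICNTL(1)-ICNTL(4) parameters (output units and print level), which PETSc passes through as runtime options when the MUMPS solver is selected (e.g. with -pc_factor_mat_solver_type mumps). A minimal sketch; the exact meaning of each entry should be checked against the MUMPS users' guide for the installed version:

  # ICNTL(4) is the print level: 1 keeps error messages only
  -mat_mumps_icntl_4 1

  # ICNTL(3) is the output unit for global diagnostics: 0 suppresses that stream
  -mat_mumps_icntl_3 0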
> For example, the first such block starts with > > Entering DMUMPS 5.2.1 from C interface with JOB, N = 1 15 > executing #MPI = 8, without OMP > > ================================================= > This MUMPS version includes code for SAVE_RESTORE > ================================================= > L U Solver for unsymmetric matrices > Type of parallelism: Working host > > ****** ANALYSIS STEP ******** > > ** Max-trans not allowed because matrix is distributed > > Entering analysis phase with ... > > Is there a way to control (and eliminate) that output? > I guessed with --with-debugging=0 would do the trick, but that was not the > case. > > Thanks, > Santiago > From mfadams at lbl.gov Sun Apr 19 10:40:54 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 19 Apr 2020 11:40:54 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: > > > > > > --download-superlu --download-superlu_dist > > You are installing with both superlu and superlu_dist. To verify - remove > superlu - and keep only superlu_dist > I tried this earlier. Here is the error message: 0 SNES Function norm 1.511918966798e-02 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers *[0]PETSC ERROR: Could not locate solver package superlu for factorization type LU and matrix type seqaij. Perhaps you must ./configure with --download-superlu*[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.13-163-g4c71feb GIT Date: 2020-04-18 15:35:50 -0400 [0]PETSC ERROR: ./ex112d on a arch-summit-opt-gnu-cuda-omp-2db named h23n05 by adams Sun Apr 19 11:39:05 2020 [0]PETSC ERROR: Configure options --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp -DFP_DIM=2" --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC -fopenmp" --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis --download-superlu_dist --with-make-np=16 --download-parmetis --download-triangle --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 --with-64-bit-indices=0 --with-debugging=0 PETSC_ARCH=arch-summit-opt-gnu-cuda-omp-2db --with-openmp=1 --with-threadsaftey=1 --with-log=1 [0]PETSC ERROR: #1 MatGetFactor() line 4490 in /autofs/nccs-svm1_home1/adams/petsc/src/mat/interface/matrix.c [0]PETSC ERROR: #2 PCSetUp_LU() line 88 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/impls/factor/lu/lu.c [0]PETSC ERROR: #3 PCSetUp() line 894 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #4 KSPSetUp() line 376 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #5 KSPSolve_Private() line 633 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #6 KSPSolve() line 853 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #7 SNESSolve_NEWTONLS() line 225 in /autofs/nccs-svm1_home1/adams/petsc/src/snes/impls/ls/ls.c [0]PETSC ERROR: #8 SNESSolve() line 4520 in 
/autofs/nccs-svm1_home1/adams/petsc/src/snes/interface/snes.c [0]PETSC ERROR: #9 TSStep_ARKIMEX() line 811 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/impls/arkimex/arkimex.c [0]PETSC ERROR: #10 TSStep() line 3721 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #11 TSSolve() line 4127 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #12 main() line 955 in ex11.c > > Satish > > > > > > > > > > > > SuperLU: > > > Version: 5.2.1 > > > Includes: > -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > > > Library: > -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > > > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > > > > > which is serial superlu, not superlu_dist. These are 2 different > codes. > > > > > > Sherry > > > > > > On Sat, Apr 18, 2020 at 4:54 PM Mark Adams wrote: > > > > > >> > > >> > > >> On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li wrote: > > >> > > >>> Mark, > > >>> > > >>> It seems you are talking about serial superlu? There is no GPU > support > > >>> in it. Only superlu_dist has GPU. > > >>> > > >> > > >> I am using superlu_dist on one processor. Should that work? > > >> > > >> > > >>> > > >>> But I don't know why there is a crash. > > >>> > > >>> Sherry > > >>> > > >>> On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote: > > >>> > > >>>> Sherry, I did rebase with master this week: > > >>>> > > >>>> SuperLU: > > >>>> Version: 5.2.1 > > >>>> Includes: > -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > > >>>> Library: > > >>>> -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > > >>>> -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > >>>> > > >>>> I see the same thing with a debug build. > > >>>> > > >>>> If anyone is interested in looking at this, I was also able to see > that > > >>>> plex/ex10 in my branch, which is a very simple test , also does not > crash > > >>>> and also does not seem to use GPUs in SuperLU. > > >>>> > > >>>> > > >>>> On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: > > >>>> > > >>>>> When you install "-download-superlu_dist", that is from 'master' > > >>>>> branch? > > >>>>> > > >>>>> In the error trace, I recognized this: > > >>>>> > > >>>>> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- > > >>>>> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ > > >>>>> LU+0xc4)[0x20000195aff4] > > >>>>> > > >>>>> This is to free the L and U data structures at the end of the > program. > > >>>>> > > >>>>> Sherry > > >>>>> > > >>>>> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams > wrote: > > >>>>> > > >>>>>> Back to SuperLU + GPUs (adding Sherry) > > >>>>>> > > >>>>>> I get this error (appended) running 'check', as I said before. It > > >>>>>> looks like ex19 is *failing* with CUDA but it is not clear it has > > >>>>>> anything to do with SuperLU. I can not find these diagnostics > that got > > >>>>>> printed after the error in PETSc or SuperLU. > > >>>>>> > > >>>>>> So this is a problem, but moving on to my code (plex/ex11 in > > >>>>>> mark/feature-xgc-interface-rebase-v2, configure script appended). > It runs. 
> > >>>>>> I use superlu and GPUs, but they do not seem to be used in > SuperLU: > > >>>>>> > > >>>>>> > > >>>>>> > ------------------------------------------------------------------------------------------------------------------------ > > >>>>>> Event Count Time (sec) Flop > > >>>>>> --- Global --- --- Stage ---- Total GPU - > CpuToGpu - - > > >>>>>> GpuToCpu - GPU > > >>>>>> Max Ratio Max Ratio Max Ratio Mess > > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s > Count Size > > >>>>>> Count Size %F > > >>>>>> > > >>>>>> > --------------------------------------------------------------------------------------------------------------------------------------------------------------- > > >>>>>> .... > > >>>>>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 > > >>>>>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 > *0 > > >>>>>> 0.00e+00 0 0.00e+00 0* > > >>>>>> > > >>>>>> No CUDA version. The times are the same and no GPU > > >>>>>> communication above. So SuperLU does not seem to be using GPUs. > > >>>>>> > > >>>>>> > > >>>>>> > ------------------------------------------------------------------------------------------------------------------------ > > >>>>>> Event Count Time (sec) Flop > > >>>>>> --- Global --- --- Stage ---- Total > > >>>>>> Max Ratio Max Ratio Max Ratio Mess > > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > >>>>>> > > >>>>>> > ------------------------------------------------------------------------------------------------------------------------ > > >>>>>> .... > > >>>>>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 > > >>>>>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 > > >>>>>> > > >>>>>> There are some differences: ex19 use DMDA and I use DMPlex, > 'check' > > >>>>>> is run in my home directory, where files can not be written, and > I run my > > >>>>>> code in the project areas. > > >>>>>> > > >>>>>> The timings are different without superlu so I think superlu is > being > > >>>>>> used. 
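(A quicker way to confirm which factorization package actually ran, rather than inferring it from timings, is to add -ksp_view to the run line below; with -pc_type lu the PC section of the output names the factor solver package, superlu versus superlu_dist, together with the factorization options in effect. For example, appended to the existing options:

  -pc_type lu -pc_factor_mat_solver_type superlu -ksp_view

This is only a cross-check; it does not change the solve.)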
THis is how I run this (w and w/o -mat_superlu_equil > -dm_mat_type > > >>>>>> sell) > > >>>>>> > > >>>>>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view > > >>>>>> hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer > -Ez 0 > > >>>>>> -petscspace_degree 2 -mass_petscspace_degree 2 > -petscspace_poly_tensor 1 > > >>>>>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 > -ion_charges 2 > > >>>>>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor > > >>>>>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor > -snes_converged_reason > > >>>>>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover > > >>>>>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 > -ts_dt 1e-1 > > >>>>>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 > > >>>>>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed > 0.75 > > >>>>>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly > > >>>>>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 > -amr_z_refine2 0 > > >>>>>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 > -z_radius2 > > >>>>>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 > > >>>>>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info > :dm,tsadapt: > > >>>>>> -sub_thread_block_size 4 -options_left -log_view > -pc_factor_mat_solver_type > > >>>>>> superlu -mat_superlu_equil -dm_mat_type sell > > >>>>>> > > >>>>>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs > turned > > >>>>>> on in SuperLU. > > >>>>>> Thoughts? > > >>>>>> > > >>>>>> Thanks, > > >>>>>> Mark > > >>>>>> > > >>>>>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make > > >>>>>> PETSC_DIR=/ccs/home/adams/petsc > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > >>>>>> check > > >>>>>> Running check examples to verify correct installation > > >>>>>> Using PETSC_DIR=/ccs/home/adams/petsc and > > >>>>>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI > > >>>>>> process > > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI > > >>>>>> processes > > >>>>>> 2c2,39 > > >>>>>> < Number of SNES iterations = 2 > > >>>>>> --- > > >>>>>> > > >>>>>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): > > >>>>>> Assertion `cacheNode != __null' failed.*> [h50n09:102287] *** > > >>>>>> Process received signal *** > > >>>>>> > CUDA version: v 10010 > > >>>>>> > CUDA Devices: > > >>>>>> > > > >>>>>> > 0 : Tesla V100-SXM2-16GB 7 0 > > >>>>>> > Global memory: 16128 mb > > >>>>>> > Shared memory: 48 kb > > >>>>>> > Constant memory: 64 kb > > >>>>>> > Block registers: 65536 > > >>>>>> > > > >>>>>> > [h50n09:102287] Signal: Aborted (6) > > >>>>>> > [h50n09:102287] Associated errno: Unknown error 1072693248 > > >>>>>> (1072693248) > > >>>>>> > [h50n09:102287] Signal code: User function (kill, sigsend, > abort, > > >>>>>> etc.) 
(0) > > >>>>>> > [h50n09:102287] [ 0] [0x2000000504d8] > > >>>>>> > [h50n09:102287] [ 1] > /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] > > >>>>>> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] > > >>>>>> > [h50n09:102287] [ 3] > > >>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] > > >>>>>> > [h50n09:102287] [ 4] > > >>>>>> > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > >>>>>> > [h50n09:102287] [ 5] > > >>>>>> > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > >>>>>> > [h50n09:102287] [ 6] > > >>>>>> > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] > > >>>>>> > [h50n09:102287] [ 7] > > >>>>>> > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] > > >>>>>> > [h50n09:102287] [ 8] > > >>>>>> > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] > > >>>>>> > [h50n09:102287] [ 9] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] > > >>>>>> > [h50n09:102287] [10] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] > > >>>>>> > [h50n09:102287] [11] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] > > >>>>>> > [h50n09:102287] [12] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] > > >>>>>> > [h50n09:102287] [13] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] > > >>>>>> > [h50n09:102287] [14] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] > > >>>>>> > [h50n09:102287] [15] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] > > >>>>>> > [h50n09:102287] [16] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] > > >>>>>> > [h50n09:102287] [17] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] > > >>>>>> > [h50n09:102287] [18] > > >>>>>> > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] > > >>>>>> > [h50n09:102287] [19] ./ex19[0x10001a6c] > > >>>>>> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] > > >>>>>> > [h50n09:102287] [21] > > >>>>>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] > > >>>>>> > [h50n09:102287] *** End of error message *** > > >>>>>> > ERROR: One or more process (first noticed rank 0) terminated > with > > >>>>>> signal 6 > > >>>>>> /ccs/home/adams/petsc/src/snes/tutorials > > >>>>>> Possible problem with ex19 running with superlu_dist, diffs above > > >>>>>> > > >>>>>> > > >>>>>> > > >>>>>> > > >>>>>> #!/usr/bin/env python > > >>>>>> if __name__ == '__main__': > > >>>>>> import sys > > >>>>>> import os > > >>>>>> sys.path.insert(0, os.path.abspath('config')) > > >>>>>> import configure > > >>>>>> 
configure_options = [ > > >>>>>> '--with-fc=0', > > >>>>>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', > > >>>>>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', > > >>>>>> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', > > >>>>>> '--CUDAOPTFLAGS=-O2 -g', > > >>>>>> '--with-ssl=0', > > >>>>>> '--with-batch=0', > > >>>>>> '--with-cxx=mpicxx', > > >>>>>> '--with-mpiexec=jsrun -g1', > > >>>>>> '--with-cuda=1', > > >>>>>> '--with-cudac=nvcc', > > >>>>>> '--download-p4est=1', > > >>>>>> '--download-zlib', > > >>>>>> '--download-hdf5=1', > > >>>>>> '--download-metis', > > >>>>>> '--download-superlu', > > >>>>>> '--download-superlu_dist', > > >>>>>> '--with-make-np=16', > > >>>>>> # '--with-hwloc=0', > > >>>>>> '--download-parmetis', > > >>>>>> # '--download-hypre', > > >>>>>> '--download-triangle', > > >>>>>> # '--download-amgx', > > >>>>>> # '--download-fblaslapack', > > >>>>>> '--with-blaslapack-lib=-L' + > > >>>>>> os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack', > > >>>>>> '--with-cc=mpicc', > > >>>>>> # '--with-fc=mpif90', > > >>>>>> '--with-shared-libraries=1', > > >>>>>> # '--known-mpi-shared-libraries=1', > > >>>>>> '--with-x=0', > > >>>>>> '--with-64-bit-indices=0', > > >>>>>> '--with-debugging=0', > > >>>>>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', > > >>>>>> '--with-openmp=1', > > >>>>>> '--with-threadsaftey=1', > > >>>>>> '--with-log=1' > > >>>>>> ] > > >>>>>> configure.petsc_configure(configure_options) > > >>>>>> > > >>>>>> > > >>>>>> > > >>>>>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay > > >>>>>> wrote: > > >>>>>> > > >>>>>>> The crash is inside Superlu_DIST - so don't know what to suggest. > > >>>>>>> > > >>>>>>> Might have to debug this via debugger and check with Sherry. > > >>>>>>> > > >>>>>>> Satish > > >>>>>>> > > >>>>>>> On Wed, 15 Apr 2020, Mark Adams wrote: > > >>>>>>> > > >>>>>>> > Ah, OK 'check' will test SuperLU. Semi worked: > > >>>>>>> > > > >>>>>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make > > >>>>>>> > PETSC_DIR=/ccs/home/adams/petsc > > >>>>>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > >>>>>>> > check > > >>>>>>> > Running check examples to verify correct installation > > >>>>>>> > Using PETSC_DIR=/ccs/home/adams/petsc and > > >>>>>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 > MPI > > >>>>>>> process > > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 > MPI > > >>>>>>> processes > > >>>>>>> > 2c2,38 > > >>>>>>> > < Number of SNES iterations = 2 > > >>>>>>> > --- > > >>>>>>> > > CUDA version: v 10010 > > >>>>>>> > > CUDA Devices: > > >>>>>>> > > > > >>>>>>> > > 0 : Tesla V100-SXM2-16GB 7 0 > > >>>>>>> > > Global memory: 16128 mb > > >>>>>>> > > Shared memory: 48 kb > > >>>>>>> > > Constant memory: 64 kb > > >>>>>>> > > Block registers: 65536 > > >>>>>>> > > > > >>>>>>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): > > >>>>>>> Assertion > > >>>>>>> > `cacheNode != __null' failed. 
> > >>>>>>> > > [h16n07:78357] *** Process received signal *** > > >>>>>>> > > [h16n07:78357] Signal: Aborted (6) > > >>>>>>> > > [h16n07:78357] Signal code: (1704218624) > > >>>>>>> > > [h16n07:78357] [ 0] [0x2000000504d8] > > >>>>>>> > > [h16n07:78357] [ 1] > /lib64/libc.so.6(abort+0x2b4)[0x200023992094] > > >>>>>>> > > [h16n07:78357] [ 2] > /lib64/libc.so.6(+0x356d4)[0x2000239856d4] > > >>>>>>> > > [h16n07:78357] [ 3] > > >>>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] > > >>>>>>> > > [h16n07:78357] [ 4] > > >>>>>>> > > > >>>>>>> > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > >>>>>>> > > [h16n07:78357] [ 5] > > >>>>>>> > > > >>>>>>> > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > >>>>>>> > > [h16n07:78357] [ 6] > > >>>>>>> > > > >>>>>>> > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] > > >>>>>>> > > [h16n07:78357] [ 7] > > >>>>>>> > > > >>>>>>> > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] > > >>>>>>> > > [h16n07:78357] [ 8] > > >>>>>>> > > > >>>>>>> > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] > > >>>>>>> > > [h16n07:78357] [ 9] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] > > >>>>>>> > > [h16n07:78357] [10] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] > > >>>>>>> > > [h16n07:78357] [11] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] > > >>>>>>> > > [h16n07:78357] [12] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] > > >>>>>>> > > [h16n07:78357] [13] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] > > >>>>>>> > > [h16n07:78357] [14] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] > > >>>>>>> > > [h16n07:78357] [15] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] > > >>>>>>> > > [h16n07:78357] [16] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] > > >>>>>>> > > [h16n07:78357] [17] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] > > >>>>>>> > > [h16n07:78357] [18] > > >>>>>>> > > > >>>>>>> > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] > > >>>>>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] > > >>>>>>> > > [h16n07:78357] [20] > /lib64/libc.so.6(+0x25200)[0x200023975200] > > >>>>>>> > > [h16n07:78357] [21] > > >>>>>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] > > >>>>>>> > > [h16n07:78357] *** End of error message *** > > >>>>>>> > > ERROR: One or more process 
(first noticed rank 0) terminated > > >>>>>>> with signal > > >>>>>>> > 6 > > >>>>>>> > /ccs/home/adams/petsc/src/snes/tutorials > > >>>>>>> > Possible problem with ex19 running with superlu_dist, diffs > above > > >>>>>>> > ========================================= > > >>>>>>> > > > >>>>>>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay < > balay at mcs.anl.gov> > > >>>>>>> wrote: > > >>>>>>> > > > >>>>>>> > > Please send configure.log > > >>>>>>> > > > > >>>>>>> > > This is what I get on my linux build: > > >>>>>>> > > > > >>>>>>> > > [balay at p1 petsc]$ ./configure > > >>>>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda > --with-cuda=1 > > >>>>>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make > check > > >>>>>>> > > > > >>>>>>> > > Running check examples to verify correct installation > > >>>>>>> > > Using PETSC_DIR=/home/balay/petsc and > > >>>>>>> PETSC_ARCH=arch-linux-c-debug > > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 > > >>>>>>> MPI process > > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 > > >>>>>>> MPI processes > > >>>>>>> > > 1a2,19 > > >>>>>>> > > > CUDA version: v 10020 > > >>>>>>> > > > CUDA Devices: > > >>>>>>> > > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > > >>>>>>> > > > Global memory: 3911 mb > > >>>>>>> > > > Shared memory: 48 kb > > >>>>>>> > > > Constant memory: 64 kb > > >>>>>>> > > > Block registers: 65536 > > >>>>>>> > > > > > >>>>>>> > > > CUDA version: v 10020 > > >>>>>>> > > > CUDA Devices: > > >>>>>>> > > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > > >>>>>>> > > > Global memory: 3911 mb > > >>>>>>> > > > Shared memory: 48 kb > > >>>>>>> > > > Constant memory: 64 kb > > >>>>>>> > > > Block registers: 65536 > > >>>>>>> > > > > > >>>>>>> > > /home/balay/petsc/src/snes/tutorials > > >>>>>>> > > Possible problem with ex19 running with superlu_dist, diffs > above > > >>>>>>> > > ========================================= > > >>>>>>> > > Fortran example src/snes/tutorials/ex5f run successfully > with 1 > > >>>>>>> MPI process > > >>>>>>> > > Completed test examples > > >>>>>>> > > > > >>>>>>> > > > > >>>>>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: > > >>>>>>> > > > > >>>>>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay < > > >>>>>>> balay at mcs.anl.gov> wrote: > > >>>>>>> > > > > > >>>>>>> > > > > The build should work. It should give some verbose info > [at > > >>>>>>> runtime] > > >>>>>>> > > > > regarding GPUs - from the following code. > > >>>>>>> > > > > > > >>>>>>> > > > > > > >>>>>>> > > > I don't see that and I am running GPUs in my code and have > > >>>>>>> gotten > > >>>>>>> > > cusparse > > >>>>>>> > > > LU to run. Should I use '-info :sys:' ? > > >>>>>>> > > > > > >>>>>>> > > > > > >>>>>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > > >>>>>>> > > > > void DisplayHeader() > > >>>>>>> > > > > { > > >>>>>>> > > > > const int kb = 1024; > > >>>>>>> > > > > const int mb = kb * kb; > > >>>>>>> > > > > // cout << "NBody.GPU" << endl << "=========" << > endl << > > >>>>>>> endl; > > >>>>>>> > > > > > > >>>>>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > > >>>>>>> > > > > //cout << "Thrust version: v" << > THRUST_MAJOR_VERSION << > > >>>>>>> "." 
<< > > >>>>>>> > > > > THRUST_MINOR_VERSION << endl << endl; > > >>>>>>> > > > > > > >>>>>>> > > > > int devCount; > > >>>>>>> > > > > cudaGetDeviceCount(&devCount); > > >>>>>>> > > > > printf( "CUDA Devices: \n \n"); > > >>>>>>> > > > > > > >>>>>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > >>>>>>> > > > > > > >>>>>>> > > > > Satish > > >>>>>>> > > > > > > >>>>>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > >>>>>>> > > > > > > >>>>>>> > > > > > I remember Barry said superlu gpu support is broken. > > >>>>>>> > > > > > --Junchao Zhang > > >>>>>>> > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams < > > >>>>>>> mfadams at lbl.gov> wrote: > > >>>>>>> > > > > > > > >>>>>>> > > > > > > How does one use SuperLU with GPUs. I don't seem to > get > > >>>>>>> any GPU > > >>>>>>> > > > > > > performance data so I assume GPUs are not getting > turned > > >>>>>>> on. Am I > > >>>>>>> > > wrong > > >>>>>>> > > > > > > about that? > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > I configure with: > > >>>>>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 > -fPIC > > >>>>>>> -fopenmp" > > >>>>>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g > > >>>>>>> -O2 -fPIC > > >>>>>>> > > > > -fopenmp" > > >>>>>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > > >>>>>>> > > --with-cxx=mpicxx > > >>>>>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 > > >>>>>>> --with-cudac=nvcc > > >>>>>>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 > > >>>>>>> > > --download-metis > > >>>>>>> > > > > > > --download-superlu --download-superlu_dist > > >>>>>>> --with-make-np=16 > > >>>>>>> > > > > > > --download-parmetis --download-triangle > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > >>>>>>> > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > >>>>>>> > > > > > > -lblas -llapack" --with-cc=mpicc > > >>>>>>> --with-shared-libraries=1 > > >>>>>>> > > --with-x=0 > > >>>>>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 > > >>>>>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > --with-openmp=1 > > >>>>>>> > > > > > > --with-threadsaftey=1 --with-log=1 > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > Thanks, > > >>>>>>> > > > > > > Mark > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > > >>>>>>> > > > > >>>>>>> > > > > >>>>>>> > > > >>>>>>> > > >>>>>>> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Sun Apr 19 10:44:36 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Sun, 19 Apr 2020 09:44:36 -0600 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: <3E48AEEF-ECC5-44E7-8457-1756717539F7@gmail.com> Hi Mark, This should help: -pc_factor_mat_solver_type superlu_dist Thanks, Fande > On Apr 19, 2020, at 9:41 AM, Mark Adams wrote: > > ? >> >> >> > > --download-superlu --download-superlu_dist >> >> You are installing with both superlu and superlu_dist. To verify - remove superlu - and keep only superlu_dist > > I tried this earlier. 
Here is the error message: > > 0 SNES Function norm 1.511918966798e-02 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers > [0]PETSC ERROR: Could not locate solver package superlu for factorization type LU and matrix type seqaij. Perhaps you must ./configure with --download-superlu > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Development GIT revision: v3.13-163-g4c71feb GIT Date: 2020-04-18 15:35:50 -0400 > [0]PETSC ERROR: ./ex112d on a arch-summit-opt-gnu-cuda-omp-2db named h23n05 by adams Sun Apr 19 11:39:05 2020 > [0]PETSC ERROR: Configure options --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp -DFP_DIM=2" --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC -fopenmp" --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 --with-cxx=mpicxx --with-mpiexec="jsrun -g1" --with-cuda=1 --with-cudac=nvcc --download-p4est=1 --download-zlib --download-hdf5=1 --download-metis --download-superlu_dist --with-make-np=16 --download-parmetis --download-triangle --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 --with-64-bit-indices=0 --with-debugging=0 PETSC_ARCH=arch-summit-opt-gnu-cuda-omp-2db --with-openmp=1 --with-threadsaftey=1 --with-log=1 > [0]PETSC ERROR: #1 MatGetFactor() line 4490 in /autofs/nccs-svm1_home1/adams/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: #2 PCSetUp_LU() line 88 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: #3 PCSetUp() line 894 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #4 KSPSetUp() line 376 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #5 KSPSolve_Private() line 633 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #6 KSPSolve() line 853 in /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #7 SNESSolve_NEWTONLS() line 225 in /autofs/nccs-svm1_home1/adams/petsc/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #8 SNESSolve() line 4520 in /autofs/nccs-svm1_home1/adams/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: #9 TSStep_ARKIMEX() line 811 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/impls/arkimex/arkimex.c > [0]PETSC ERROR: #10 TSStep() line 3721 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #11 TSSolve() line 4127 in /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #12 main() line 955 in ex11.c > >> >> Satish >> >> >> > >> > >> > > >> > > SuperLU: >> > > Version: 5.2.1 >> > > Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include >> > > Library: -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib >> > > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu >> > > >> > > which is serial superlu, not superlu_dist. These are 2 different codes. >> > > >> > > Sherry >> > > >> > > On Sat, Apr 18, 2020 at 4:54 PM Mark Adams wrote: >> > > >> > >> >> > >> >> > >> On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li wrote: >> > >> >> > >>> Mark, >> > >>> >> > >>> It seems you are talking about serial superlu? 
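(For context on the error quoted above: "Could not locate solver package superlu" is what PETSc reports when the run still asks for the serial SuperLU package while only --download-superlu_dist was configured, so the package name in the runtime option has to match what was actually built. That is what the suggestion at the top of this message does; a minimal sketch, with option and routine names as in PETSc 3.13:

  -pc_type lu -pc_factor_mat_solver_type superlu_dist

or, equivalently in code, PCFactorSetMatSolverType(pc, MATSOLVERSUPERLU_DIST) on the factorization PC before it is set up.)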
There is no GPU support >> > >>> in it. Only superlu_dist has GPU. >> > >>> >> > >> >> > >> I am using superlu_dist on one processor. Should that work? >> > >> >> > >> >> > >>> >> > >>> But I don't know why there is a crash. >> > >>> >> > >>> Sherry >> > >>> >> > >>> On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote: >> > >>> >> > >>>> Sherry, I did rebase with master this week: >> > >>>> >> > >>>> SuperLU: >> > >>>> Version: 5.2.1 >> > >>>> Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include >> > >>>> Library: >> > >>>> -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib >> > >>>> -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu >> > >>>> >> > >>>> I see the same thing with a debug build. >> > >>>> >> > >>>> If anyone is interested in looking at this, I was also able to see that >> > >>>> plex/ex10 in my branch, which is a very simple test , also does not crash >> > >>>> and also does not seem to use GPUs in SuperLU. >> > >>>> >> > >>>> >> > >>>> On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: >> > >>>> >> > >>>>> When you install "-download-superlu_dist", that is from 'master' >> > >>>>> branch? >> > >>>>> >> > >>>>> In the error trace, I recognized this: >> > >>>>> >> > >>>>> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- >> > >>>>> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ >> > >>>>> LU+0xc4)[0x20000195aff4] >> > >>>>> >> > >>>>> This is to free the L and U data structures at the end of the program. >> > >>>>> >> > >>>>> Sherry >> > >>>>> >> > >>>>> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote: >> > >>>>> >> > >>>>>> Back to SuperLU + GPUs (adding Sherry) >> > >>>>>> >> > >>>>>> I get this error (appended) running 'check', as I said before. It >> > >>>>>> looks like ex19 is *failing* with CUDA but it is not clear it has >> > >>>>>> anything to do with SuperLU. I can not find these diagnostics that got >> > >>>>>> printed after the error in PETSc or SuperLU. >> > >>>>>> >> > >>>>>> So this is a problem, but moving on to my code (plex/ex11 in >> > >>>>>> mark/feature-xgc-interface-rebase-v2, configure script appended). It runs. >> > >>>>>> I use superlu and GPUs, but they do not seem to be used in SuperLU: >> > >>>>>> >> > >>>>>> >> > >>>>>> ------------------------------------------------------------------------------------------------------------------------ >> > >>>>>> Event Count Time (sec) Flop >> > >>>>>> --- Global --- --- Stage ---- Total GPU - CpuToGpu - - >> > >>>>>> GpuToCpu - GPU >> > >>>>>> Max Ratio Max Ratio Max Ratio Mess >> > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s Count Size >> > >>>>>> Count Size %F >> > >>>>>> >> > >>>>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------- >> > >>>>>> .... >> > >>>>>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 >> > >>>>>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 *0 >> > >>>>>> 0.00e+00 0 0.00e+00 0* >> > >>>>>> >> > >>>>>> No CUDA version. The times are the same and no GPU >> > >>>>>> communication above. So SuperLU does not seem to be using GPUs. 
>> > >>>>>> >> > >>>>>> >> > >>>>>> ------------------------------------------------------------------------------------------------------------------------ >> > >>>>>> Event Count Time (sec) Flop >> > >>>>>> --- Global --- --- Stage ---- Total >> > >>>>>> Max Ratio Max Ratio Max Ratio Mess >> > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >> > >>>>>> >> > >>>>>> ------------------------------------------------------------------------------------------------------------------------ >> > >>>>>> .... >> > >>>>>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 >> > >>>>>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 >> > >>>>>> >> > >>>>>> There are some differences: ex19 use DMDA and I use DMPlex, 'check' >> > >>>>>> is run in my home directory, where files can not be written, and I run my >> > >>>>>> code in the project areas. >> > >>>>>> >> > >>>>>> The timings are different without superlu so I think superlu is being >> > >>>>>> used. THis is how I run this (w and w/o -mat_superlu_equil -dm_mat_type >> > >>>>>> sell) >> > >>>>>> >> > >>>>>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view >> > >>>>>> hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer -Ez 0 >> > >>>>>> -petscspace_degree 2 -mass_petscspace_degree 2 -petscspace_poly_tensor 1 >> > >>>>>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 -ion_charges 2 >> > >>>>>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor >> > >>>>>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor -snes_converged_reason >> > >>>>>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover >> > >>>>>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 -ts_dt 1e-1 >> > >>>>>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 >> > >>>>>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed 0.75 >> > >>>>>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly >> > >>>>>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 -amr_z_refine2 0 >> > >>>>>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 -z_radius2 >> > >>>>>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 >> > >>>>>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info :dm,tsadapt: >> > >>>>>> -sub_thread_block_size 4 -options_left -log_view -pc_factor_mat_solver_type >> > >>>>>> superlu -mat_superlu_equil -dm_mat_type sell >> > >>>>>> >> > >>>>>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs turned >> > >>>>>> on in SuperLU. >> > >>>>>> Thoughts? 
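(On the "GPUs not turned on" question: as Sherry notes above, only SuperLU_DIST has GPU support, and that support is a build-time feature of SuperLU_DIST itself rather than something PETSc enables through the solver-type option alone. In SuperLU_DIST versions from around this time the GPU offload of the Schur-complement GEMMs had to be compiled in and was then steered at run time by SuperLU_DIST's own environment variables; the variable name below may differ between SuperLU_DIST versions and should be checked against the documentation of the installed version, so treat it as an assumption rather than a recipe:

  export SUPERLU_ACC_OFFLOAD=1

If the library was built without its GPU code path, the factorization runs entirely on the CPU, which would be consistent with the identical timings and the zero CpuToGpu/GpuToCpu counts in the -log_view output above.)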
>> > >>>>>> >> > >>>>>> Thanks, >> > >>>>>> Mark >> > >>>>>> >> > >>>>>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make >> > >>>>>> PETSC_DIR=/ccs/home/adams/petsc PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >> > >>>>>> check >> > >>>>>> Running check examples to verify correct installation >> > >>>>>> Using PETSC_DIR=/ccs/home/adams/petsc and >> > >>>>>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp >> > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >> > >>>>>> process >> > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >> > >>>>>> processes >> > >>>>>> 2c2,39 >> > >>>>>> < Number of SNES iterations = 2 >> > >>>>>> --- >> > >>>>>> >> > >>>>>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): >> > >>>>>> Assertion `cacheNode != __null' failed.*> [h50n09:102287] *** >> > >>>>>> Process received signal *** >> > >>>>>> > CUDA version: v 10010 >> > >>>>>> > CUDA Devices: >> > >>>>>> > >> > >>>>>> > 0 : Tesla V100-SXM2-16GB 7 0 >> > >>>>>> > Global memory: 16128 mb >> > >>>>>> > Shared memory: 48 kb >> > >>>>>> > Constant memory: 64 kb >> > >>>>>> > Block registers: 65536 >> > >>>>>> > >> > >>>>>> > [h50n09:102287] Signal: Aborted (6) >> > >>>>>> > [h50n09:102287] Associated errno: Unknown error 1072693248 >> > >>>>>> (1072693248) >> > >>>>>> > [h50n09:102287] Signal code: User function (kill, sigsend, abort, >> > >>>>>> etc.) (0) >> > >>>>>> > [h50n09:102287] [ 0] [0x2000000504d8] >> > >>>>>> > [h50n09:102287] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] >> > >>>>>> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] >> > >>>>>> > [h50n09:102287] [ 3] >> > >>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] >> > >>>>>> > [h50n09:102287] [ 4] >> > >>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >> > >>>>>> > [h50n09:102287] [ 5] >> > >>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >> > >>>>>> > [h50n09:102287] [ 6] >> > >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] >> > >>>>>> > [h50n09:102287] [ 7] >> > >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] >> > >>>>>> > [h50n09:102287] [ 8] >> > >>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] >> > >>>>>> > [h50n09:102287] [ 9] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] >> > >>>>>> > [h50n09:102287] [10] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] >> > >>>>>> > [h50n09:102287] [11] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] >> > >>>>>> > [h50n09:102287] [12] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] >> > >>>>>> > [h50n09:102287] [13] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] >> > >>>>>> > [h50n09:102287] [14] >> > >>>>>> 
/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] >> > >>>>>> > [h50n09:102287] [15] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] >> > >>>>>> > [h50n09:102287] [16] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] >> > >>>>>> > [h50n09:102287] [17] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] >> > >>>>>> > [h50n09:102287] [18] >> > >>>>>> /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] >> > >>>>>> > [h50n09:102287] [19] ./ex19[0x10001a6c] >> > >>>>>> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] >> > >>>>>> > [h50n09:102287] [21] >> > >>>>>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] >> > >>>>>> > [h50n09:102287] *** End of error message *** >> > >>>>>> > ERROR: One or more process (first noticed rank 0) terminated with >> > >>>>>> signal 6 >> > >>>>>> /ccs/home/adams/petsc/src/snes/tutorials >> > >>>>>> Possible problem with ex19 running with superlu_dist, diffs above >> > >>>>>> >> > >>>>>> >> > >>>>>> >> > >>>>>> >> > >>>>>> #!/usr/bin/env python >> > >>>>>> if __name__ == '__main__': >> > >>>>>> import sys >> > >>>>>> import os >> > >>>>>> sys.path.insert(0, os.path.abspath('config')) >> > >>>>>> import configure >> > >>>>>> configure_options = [ >> > >>>>>> '--with-fc=0', >> > >>>>>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', >> > >>>>>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', >> > >>>>>> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', >> > >>>>>> '--CUDAOPTFLAGS=-O2 -g', >> > >>>>>> '--with-ssl=0', >> > >>>>>> '--with-batch=0', >> > >>>>>> '--with-cxx=mpicxx', >> > >>>>>> '--with-mpiexec=jsrun -g1', >> > >>>>>> '--with-cuda=1', >> > >>>>>> '--with-cudac=nvcc', >> > >>>>>> '--download-p4est=1', >> > >>>>>> '--download-zlib', >> > >>>>>> '--download-hdf5=1', >> > >>>>>> '--download-metis', >> > >>>>>> '--download-superlu', >> > >>>>>> '--download-superlu_dist', >> > >>>>>> '--with-make-np=16', >> > >>>>>> # '--with-hwloc=0', >> > >>>>>> '--download-parmetis', >> > >>>>>> # '--download-hypre', >> > >>>>>> '--download-triangle', >> > >>>>>> # '--download-amgx', >> > >>>>>> # '--download-fblaslapack', >> > >>>>>> '--with-blaslapack-lib=-L' + >> > >>>>>> os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack', >> > >>>>>> '--with-cc=mpicc', >> > >>>>>> # '--with-fc=mpif90', >> > >>>>>> '--with-shared-libraries=1', >> > >>>>>> # '--known-mpi-shared-libraries=1', >> > >>>>>> '--with-x=0', >> > >>>>>> '--with-64-bit-indices=0', >> > >>>>>> '--with-debugging=0', >> > >>>>>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', >> > >>>>>> '--with-openmp=1', >> > >>>>>> '--with-threadsaftey=1', >> > >>>>>> '--with-log=1' >> > >>>>>> ] >> > >>>>>> configure.petsc_configure(configure_options) >> > >>>>>> >> > >>>>>> >> > >>>>>> >> > >>>>>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay >> > >>>>>> wrote: >> > >>>>>> >> > >>>>>>> The crash is inside Superlu_DIST - so don't know what to suggest. >> > >>>>>>> >> > >>>>>>> Might have to debug this via debugger and check with Sherry. >> > >>>>>>> >> > >>>>>>> Satish >> > >>>>>>> >> > >>>>>>> On Wed, 15 Apr 2020, Mark Adams wrote: >> > >>>>>>> >> > >>>>>>> > Ah, OK 'check' will test SuperLU. 
Semi worked: >> > >>>>>>> > >> > >>>>>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make >> > >>>>>>> > PETSC_DIR=/ccs/home/adams/petsc >> > >>>>>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >> > >>>>>>> > check >> > >>>>>>> > Running check examples to verify correct installation >> > >>>>>>> > Using PETSC_DIR=/ccs/home/adams/petsc and >> > >>>>>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp >> > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI >> > >>>>>>> process >> > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI >> > >>>>>>> processes >> > >>>>>>> > 2c2,38 >> > >>>>>>> > < Number of SNES iterations = 2 >> > >>>>>>> > --- >> > >>>>>>> > > CUDA version: v 10010 >> > >>>>>>> > > CUDA Devices: >> > >>>>>>> > > >> > >>>>>>> > > 0 : Tesla V100-SXM2-16GB 7 0 >> > >>>>>>> > > Global memory: 16128 mb >> > >>>>>>> > > Shared memory: 48 kb >> > >>>>>>> > > Constant memory: 64 kb >> > >>>>>>> > > Block registers: 65536 >> > >>>>>>> > > >> > >>>>>>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): >> > >>>>>>> Assertion >> > >>>>>>> > `cacheNode != __null' failed. >> > >>>>>>> > > [h16n07:78357] *** Process received signal *** >> > >>>>>>> > > [h16n07:78357] Signal: Aborted (6) >> > >>>>>>> > > [h16n07:78357] Signal code: (1704218624) >> > >>>>>>> > > [h16n07:78357] [ 0] [0x2000000504d8] >> > >>>>>>> > > [h16n07:78357] [ 1] /lib64/libc.so.6(abort+0x2b4)[0x200023992094] >> > >>>>>>> > > [h16n07:78357] [ 2] /lib64/libc.so.6(+0x356d4)[0x2000239856d4] >> > >>>>>>> > > [h16n07:78357] [ 3] >> > >>>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] >> > >>>>>>> > > [h16n07:78357] [ 4] >> > >>>>>>> > >> > >>>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] >> > >>>>>>> > > [h16n07:78357] [ 5] >> > >>>>>>> > >> > >>>>>>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] >> > >>>>>>> > > [h16n07:78357] [ 6] >> > >>>>>>> > >> > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] >> > >>>>>>> > > [h16n07:78357] [ 7] >> > >>>>>>> > >> > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] >> > >>>>>>> > > [h16n07:78357] [ 8] >> > >>>>>>> > >> > >>>>>>> /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] >> > >>>>>>> > > [h16n07:78357] [ 9] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] >> > >>>>>>> > > [h16n07:78357] [10] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] >> > >>>>>>> > > [h16n07:78357] [11] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] >> > >>>>>>> > > [h16n07:78357] [12] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] >> > >>>>>>> > > [h16n07:78357] [13] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] >> > >>>>>>> > > 
[h16n07:78357] [14] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] >> > >>>>>>> > > [h16n07:78357] [15] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] >> > >>>>>>> > > [h16n07:78357] [16] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] >> > >>>>>>> > > [h16n07:78357] [17] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] >> > >>>>>>> > > [h16n07:78357] [18] >> > >>>>>>> > >> > >>>>>>> /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] >> > >>>>>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] >> > >>>>>>> > > [h16n07:78357] [20] /lib64/libc.so.6(+0x25200)[0x200023975200] >> > >>>>>>> > > [h16n07:78357] [21] >> > >>>>>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] >> > >>>>>>> > > [h16n07:78357] *** End of error message *** >> > >>>>>>> > > ERROR: One or more process (first noticed rank 0) terminated >> > >>>>>>> with signal >> > >>>>>>> > 6 >> > >>>>>>> > /ccs/home/adams/petsc/src/snes/tutorials >> > >>>>>>> > Possible problem with ex19 running with superlu_dist, diffs above >> > >>>>>>> > ========================================= >> > >>>>>>> > >> > >>>>>>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay >> > >>>>>>> wrote: >> > >>>>>>> > >> > >>>>>>> > > Please send configure.log >> > >>>>>>> > > >> > >>>>>>> > > This is what I get on my linux build: >> > >>>>>>> > > >> > >>>>>>> > > [balay at p1 petsc]$ ./configure >> > >>>>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda --with-cuda=1 >> > >>>>>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make check >> > >>>>>>> > > >> > >>>>>>> > > Running check examples to verify correct installation >> > >>>>>>> > > Using PETSC_DIR=/home/balay/petsc and >> > >>>>>>> PETSC_ARCH=arch-linux-c-debug >> > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 >> > >>>>>>> MPI process >> > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 >> > >>>>>>> MPI processes >> > >>>>>>> > > 1a2,19 >> > >>>>>>> > > > CUDA version: v 10020 >> > >>>>>>> > > > CUDA Devices: >> > >>>>>>> > > > >> > >>>>>>> > > > 0 : Quadro T2000 7 5 >> > >>>>>>> > > > Global memory: 3911 mb >> > >>>>>>> > > > Shared memory: 48 kb >> > >>>>>>> > > > Constant memory: 64 kb >> > >>>>>>> > > > Block registers: 65536 >> > >>>>>>> > > > >> > >>>>>>> > > > CUDA version: v 10020 >> > >>>>>>> > > > CUDA Devices: >> > >>>>>>> > > > >> > >>>>>>> > > > 0 : Quadro T2000 7 5 >> > >>>>>>> > > > Global memory: 3911 mb >> > >>>>>>> > > > Shared memory: 48 kb >> > >>>>>>> > > > Constant memory: 64 kb >> > >>>>>>> > > > Block registers: 65536 >> > >>>>>>> > > > >> > >>>>>>> > > /home/balay/petsc/src/snes/tutorials >> > >>>>>>> > > Possible problem with ex19 running with superlu_dist, diffs above >> > >>>>>>> > > ========================================= >> > >>>>>>> > > Fortran example src/snes/tutorials/ex5f run successfully with 1 >> > >>>>>>> MPI process >> > >>>>>>> > > Completed test examples >> > >>>>>>> > > >> > >>>>>>> > > >> > >>>>>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: >> > >>>>>>> > > >> > >>>>>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay < >> > >>>>>>> balay at mcs.anl.gov> wrote: >> > >>>>>>> > > > >> > >>>>>>> > > > 
> The build should work. It should give some verbose info [at >> > >>>>>>> runtime] >> > >>>>>>> > > > > regarding GPUs - from the following code. >> > >>>>>>> > > > > >> > >>>>>>> > > > > >> > >>>>>>> > > > I don't see that and I am running GPUs in my code and have >> > >>>>>>> gotten >> > >>>>>>> > > cusparse >> > >>>>>>> > > > LU to run. Should I use '-info :sys:' ? >> > >>>>>>> > > > >> > >>>>>>> > > > >> > >>>>>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> >> > >>>>>>> > > > > void DisplayHeader() >> > >>>>>>> > > > > { >> > >>>>>>> > > > > const int kb = 1024; >> > >>>>>>> > > > > const int mb = kb * kb; >> > >>>>>>> > > > > // cout << "NBody.GPU" << endl << "=========" << endl << >> > >>>>>>> endl; >> > >>>>>>> > > > > >> > >>>>>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); >> > >>>>>>> > > > > //cout << "Thrust version: v" << THRUST_MAJOR_VERSION << >> > >>>>>>> "." << >> > >>>>>>> > > > > THRUST_MINOR_VERSION << endl << endl; >> > >>>>>>> > > > > >> > >>>>>>> > > > > int devCount; >> > >>>>>>> > > > > cudaGetDeviceCount(&devCount); >> > >>>>>>> > > > > printf( "CUDA Devices: \n \n"); >> > >>>>>>> > > > > >> > >>>>>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< >> > >>>>>>> > > > > >> > >>>>>>> > > > > Satish >> > >>>>>>> > > > > >> > >>>>>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: >> > >>>>>>> > > > > >> > >>>>>>> > > > > > I remember Barry said superlu gpu support is broken. >> > >>>>>>> > > > > > --Junchao Zhang >> > >>>>>>> > > > > > >> > >>>>>>> > > > > > >> > >>>>>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams < >> > >>>>>>> mfadams at lbl.gov> wrote: >> > >>>>>>> > > > > > >> > >>>>>>> > > > > > > How does one use SuperLU with GPUs. I don't seem to get >> > >>>>>>> any GPU >> > >>>>>>> > > > > > > performance data so I assume GPUs are not getting turned >> > >>>>>>> on. Am I >> > >>>>>>> > > wrong >> > >>>>>>> > > > > > > about that? 
>> > >>>>>>> > > > > > > >> > >>>>>>> > > > > > > I configure with: >> > >>>>>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC >> > >>>>>>> -fopenmp" >> > >>>>>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g >> > >>>>>>> -O2 -fPIC >> > >>>>>>> > > > > -fopenmp" >> > >>>>>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 >> > >>>>>>> > > --with-cxx=mpicxx >> > >>>>>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 >> > >>>>>>> --with-cudac=nvcc >> > >>>>>>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 >> > >>>>>>> > > --download-metis >> > >>>>>>> > > > > > > --download-superlu --download-superlu_dist >> > >>>>>>> --with-make-np=16 >> > >>>>>>> > > > > > > --download-parmetis --download-triangle >> > >>>>>>> > > > > > > >> > >>>>>>> > > > > >> > >>>>>>> > > >> > >>>>>>> --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 >> > >>>>>>> > > > > > > -lblas -llapack" --with-cc=mpicc >> > >>>>>>> --with-shared-libraries=1 >> > >>>>>>> > > --with-x=0 >> > >>>>>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 >> > >>>>>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp --with-openmp=1 >> > >>>>>>> > > > > > > --with-threadsaftey=1 --with-log=1 >> > >>>>>>> > > > > > > >> > >>>>>>> > > > > > > Thanks, >> > >>>>>>> > > > > > > Mark >> > >>>>>>> > > > > > > >> > >>>>>>> > > > > > >> > >>>>>>> > > > > >> > >>>>>>> > > > > >> > >>>>>>> > > > >> > >>>>>>> > > >> > >>>>>>> > > >> > >>>>>>> > >> > >>>>>>> >> > >>>>>>> >> > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sun Apr 19 10:45:49 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 19 Apr 2020 10:45:49 -0500 (CDT) Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: > *[0]PETSC ERROR: Could not locate solver package superlu for factorization Here you are requesting 'superlu' - instead of 'superlu_dist' - hence this error. Satish On Sun, 19 Apr 2020, Mark Adams wrote: > > > > > > > > > > --download-superlu --download-superlu_dist > > > > You are installing with both superlu and superlu_dist. To verify - remove > > superlu - and keep only superlu_dist > > > > I tried this earlier. Here is the error message: > > 0 SNES Function norm 1.511918966798e-02 > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: See > https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for > possible LU and Cholesky solvers > > *[0]PETSC ERROR: Could not locate solver package superlu for factorization > type LU and matrix type seqaij. Perhaps you must ./configure with > --download-superlu*[0]PETSC ERROR: See > https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Development GIT revision: v3.13-163-g4c71feb GIT > Date: 2020-04-18 15:35:50 -0400 > [0]PETSC ERROR: ./ex112d on a arch-summit-opt-gnu-cuda-omp-2db named h23n05 > by adams Sun Apr 19 11:39:05 2020 > [0]PETSC ERROR: Configure options --with-fc=0 --COPTFLAGS="-g -O2 -fPIC > -fopenmp -DFP_DIM=2" --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g > -O2 -fPIC -fopenmp" --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > --with-cxx=mpicxx --with-mpiexec="jsrun -g1" --with-cuda=1 > --with-cudac=nvcc --download-p4est=1 --download-zlib --download-hdf5=1 > --download-metis --download-superlu_dist --with-make-np=16 > --download-parmetis --download-triangle > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 > --with-64-bit-indices=0 --with-debugging=0 > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp-2db --with-openmp=1 > --with-threadsaftey=1 --with-log=1 > [0]PETSC ERROR: #1 MatGetFactor() line 4490 in > /autofs/nccs-svm1_home1/adams/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: #2 PCSetUp_LU() line 88 in > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: #3 PCSetUp() line 894 in > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #4 KSPSetUp() line 376 in > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #5 KSPSolve_Private() line 633 in > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #6 KSPSolve() line 853 in > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #7 SNESSolve_NEWTONLS() line 225 in > /autofs/nccs-svm1_home1/adams/petsc/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #8 SNESSolve() line 4520 in > /autofs/nccs-svm1_home1/adams/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: #9 TSStep_ARKIMEX() line 811 in > /autofs/nccs-svm1_home1/adams/petsc/src/ts/impls/arkimex/arkimex.c > [0]PETSC ERROR: #10 TSStep() line 3721 in > /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #11 TSSolve() line 4127 in > /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #12 main() line 955 in ex11.c > > > > > > Satish > > > > > > > > > > > > > > > > > > SuperLU: > > > > Version: 5.2.1 > > > > Includes: > > -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > > > > Library: > > -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > > > > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > > > > > > > which is serial superlu, not superlu_dist. These are 2 different > > codes. > > > > > > > > Sherry > > > > > > > > On Sat, Apr 18, 2020 at 4:54 PM Mark Adams wrote: > > > > > > > >> > > > >> > > > >> On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li wrote: > > > >> > > > >>> Mark, > > > >>> > > > >>> It seems you are talking about serial superlu? There is no GPU > > support > > > >>> in it. Only superlu_dist has GPU. > > > >>> > > > >> > > > >> I am using superlu_dist on one processor. Should that work? > > > >> > > > >> > > > >>> > > > >>> But I don't know why there is a crash. 
> > > >>> > > > >>> Sherry > > > >>> > > > >>> On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote: > > > >>> > > > >>>> Sherry, I did rebase with master this week: > > > >>>> > > > >>>> SuperLU: > > > >>>> Version: 5.2.1 > > > >>>> Includes: > > -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > > > >>>> Library: > > > >>>> -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > > > >>>> -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > > >>>> > > > >>>> I see the same thing with a debug build. > > > >>>> > > > >>>> If anyone is interested in looking at this, I was also able to see > > that > > > >>>> plex/ex10 in my branch, which is a very simple test , also does not > > crash > > > >>>> and also does not seem to use GPUs in SuperLU. > > > >>>> > > > >>>> > > > >>>> On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li wrote: > > > >>>> > > > >>>>> When you install "-download-superlu_dist", that is from 'master' > > > >>>>> branch? > > > >>>>> > > > >>>>> In the error trace, I recognized this: > > > >>>>> > > > >>>>> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- > > > >>>>> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ > > > >>>>> LU+0xc4)[0x20000195aff4] > > > >>>>> > > > >>>>> This is to free the L and U data structures at the end of the > > program. > > > >>>>> > > > >>>>> Sherry > > > >>>>> > > > >>>>> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams > > wrote: > > > >>>>> > > > >>>>>> Back to SuperLU + GPUs (adding Sherry) > > > >>>>>> > > > >>>>>> I get this error (appended) running 'check', as I said before. It > > > >>>>>> looks like ex19 is *failing* with CUDA but it is not clear it has > > > >>>>>> anything to do with SuperLU. I can not find these diagnostics > > that got > > > >>>>>> printed after the error in PETSc or SuperLU. > > > >>>>>> > > > >>>>>> So this is a problem, but moving on to my code (plex/ex11 in > > > >>>>>> mark/feature-xgc-interface-rebase-v2, configure script appended). > > It runs. > > > >>>>>> I use superlu and GPUs, but they do not seem to be used in > > SuperLU: > > > >>>>>> > > > >>>>>> > > > >>>>>> > > ------------------------------------------------------------------------------------------------------------------------ > > > >>>>>> Event Count Time (sec) Flop > > > >>>>>> --- Global --- --- Stage ---- Total GPU - > > CpuToGpu - - > > > >>>>>> GpuToCpu - GPU > > > >>>>>> Max Ratio Max Ratio Max Ratio Mess > > > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s > > Count Size > > > >>>>>> Count Size %F > > > >>>>>> > > > >>>>>> > > --------------------------------------------------------------------------------------------------------------------------------------------------------------- > > > >>>>>> .... > > > >>>>>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 0.0e+00 > > > >>>>>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 > > *0 > > > >>>>>> 0.00e+00 0 0.00e+00 0* > > > >>>>>> > > > >>>>>> No CUDA version. The times are the same and no GPU > > > >>>>>> communication above. So SuperLU does not seem to be using GPUs. 
> > > >>>>>> > > > >>>>>> > > > >>>>>> > > ------------------------------------------------------------------------------------------------------------------------ > > > >>>>>> Event Count Time (sec) Flop > > > >>>>>> --- Global --- --- Stage ---- Total > > > >>>>>> Max Ratio Max Ratio Max Ratio Mess > > > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > > >>>>>> > > > >>>>>> > > ------------------------------------------------------------------------------------------------------------------------ > > > >>>>>> .... > > > >>>>>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 0.0e+00 > > > >>>>>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 > > > >>>>>> > > > >>>>>> There are some differences: ex19 use DMDA and I use DMPlex, > > 'check' > > > >>>>>> is run in my home directory, where files can not be written, and > > I run my > > > >>>>>> code in the project areas. > > > >>>>>> > > > >>>>>> The timings are different without superlu so I think superlu is > > being > > > >>>>>> used. THis is how I run this (w and w/o -mat_superlu_equil > > -dm_mat_type > > > >>>>>> sell) > > > >>>>>> > > > >>>>>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view > > > >>>>>> hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type spitzer > > -Ez 0 > > > >>>>>> -petscspace_degree 2 -mass_petscspace_degree 2 > > -petscspace_poly_tensor 1 > > > >>>>>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 > > -ion_charges 2 > > > >>>>>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor -ts_adapt_monitor > > > >>>>>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor > > -snes_converged_reason > > > >>>>>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover > > > >>>>>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 > > -ts_dt 1e-1 > > > >>>>>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min 2e-2 > > > >>>>>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed > > 0.75 > > > >>>>>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type preonly > > > >>>>>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 > > -amr_z_refine2 0 > > > >>>>>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 -z_radius1 8 > > -z_radius2 > > > >>>>>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time 2600 > > > >>>>>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info > > :dm,tsadapt: > > > >>>>>> -sub_thread_block_size 4 -options_left -log_view > > -pc_factor_mat_solver_type > > > >>>>>> superlu -mat_superlu_equil -dm_mat_type sell > > > >>>>>> > > > >>>>>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs > > turned > > > >>>>>> on in SuperLU. > > > >>>>>> Thoughts? 
> > > >>>>>> > > > >>>>>> Thanks, > > > >>>>>> Mark > > > >>>>>> > > > >>>>>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make > > > >>>>>> PETSC_DIR=/ccs/home/adams/petsc > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > > >>>>>> check > > > >>>>>> Running check examples to verify correct installation > > > >>>>>> Using PETSC_DIR=/ccs/home/adams/petsc and > > > >>>>>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI > > > >>>>>> process > > > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI > > > >>>>>> processes > > > >>>>>> 2c2,39 > > > >>>>>> < Number of SNES iterations = 2 > > > >>>>>> --- > > > >>>>>> > > > >>>>>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): > > > >>>>>> Assertion `cacheNode != __null' failed.*> [h50n09:102287] *** > > > >>>>>> Process received signal *** > > > >>>>>> > CUDA version: v 10010 > > > >>>>>> > CUDA Devices: > > > >>>>>> > > > > >>>>>> > 0 : Tesla V100-SXM2-16GB 7 0 > > > >>>>>> > Global memory: 16128 mb > > > >>>>>> > Shared memory: 48 kb > > > >>>>>> > Constant memory: 64 kb > > > >>>>>> > Block registers: 65536 > > > >>>>>> > > > > >>>>>> > [h50n09:102287] Signal: Aborted (6) > > > >>>>>> > [h50n09:102287] Associated errno: Unknown error 1072693248 > > > >>>>>> (1072693248) > > > >>>>>> > [h50n09:102287] Signal code: User function (kill, sigsend, > > abort, > > > >>>>>> etc.) (0) > > > >>>>>> > [h50n09:102287] [ 0] [0x2000000504d8] > > > >>>>>> > [h50n09:102287] [ 1] > > /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] > > > >>>>>> > [h50n09:102287] [ 2] /lib64/libc.so.6(+0x356d4)[0x200021be56d4] > > > >>>>>> > [h50n09:102287] [ 3] > > > >>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] > > > >>>>>> > [h50n09:102287] [ 4] > > > >>>>>> > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > > >>>>>> > [h50n09:102287] [ 5] > > > >>>>>> > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > > >>>>>> > [h50n09:102287] [ 6] > > > >>>>>> > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] > > > >>>>>> > [h50n09:102287] [ 7] > > > >>>>>> > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] > > > >>>>>> > [h50n09:102287] [ 8] > > > >>>>>> > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] > > > >>>>>> > [h50n09:102287] [ 9] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] > > > >>>>>> > [h50n09:102287] [10] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] > > > >>>>>> > [h50n09:102287] [11] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] > > > >>>>>> > [h50n09:102287] [12] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] > > > >>>>>> > [h50n09:102287] [13] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] > > > 
>>>>>> > [h50n09:102287] [14] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] > > > >>>>>> > [h50n09:102287] [15] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] > > > >>>>>> > [h50n09:102287] [16] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] > > > >>>>>> > [h50n09:102287] [17] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] > > > >>>>>> > [h50n09:102287] [18] > > > >>>>>> > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] > > > >>>>>> > [h50n09:102287] [19] ./ex19[0x10001a6c] > > > >>>>>> > [h50n09:102287] [20] /lib64/libc.so.6(+0x25200)[0x200021bd5200] > > > >>>>>> > [h50n09:102287] [21] > > > >>>>>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] > > > >>>>>> > [h50n09:102287] *** End of error message *** > > > >>>>>> > ERROR: One or more process (first noticed rank 0) terminated > > with > > > >>>>>> signal 6 > > > >>>>>> /ccs/home/adams/petsc/src/snes/tutorials > > > >>>>>> Possible problem with ex19 running with superlu_dist, diffs above > > > >>>>>> > > > >>>>>> > > > >>>>>> > > > >>>>>> > > > >>>>>> #!/usr/bin/env python > > > >>>>>> if __name__ == '__main__': > > > >>>>>> import sys > > > >>>>>> import os > > > >>>>>> sys.path.insert(0, os.path.abspath('config')) > > > >>>>>> import configure > > > >>>>>> configure_options = [ > > > >>>>>> '--with-fc=0', > > > >>>>>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', > > > >>>>>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', > > > >>>>>> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', > > > >>>>>> '--CUDAOPTFLAGS=-O2 -g', > > > >>>>>> '--with-ssl=0', > > > >>>>>> '--with-batch=0', > > > >>>>>> '--with-cxx=mpicxx', > > > >>>>>> '--with-mpiexec=jsrun -g1', > > > >>>>>> '--with-cuda=1', > > > >>>>>> '--with-cudac=nvcc', > > > >>>>>> '--download-p4est=1', > > > >>>>>> '--download-zlib', > > > >>>>>> '--download-hdf5=1', > > > >>>>>> '--download-metis', > > > >>>>>> '--download-superlu', > > > >>>>>> '--download-superlu_dist', > > > >>>>>> '--with-make-np=16', > > > >>>>>> # '--with-hwloc=0', > > > >>>>>> '--download-parmetis', > > > >>>>>> # '--download-hypre', > > > >>>>>> '--download-triangle', > > > >>>>>> # '--download-amgx', > > > >>>>>> # '--download-fblaslapack', > > > >>>>>> '--with-blaslapack-lib=-L' + > > > >>>>>> os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas -llapack', > > > >>>>>> '--with-cc=mpicc', > > > >>>>>> # '--with-fc=mpif90', > > > >>>>>> '--with-shared-libraries=1', > > > >>>>>> # '--known-mpi-shared-libraries=1', > > > >>>>>> '--with-x=0', > > > >>>>>> '--with-64-bit-indices=0', > > > >>>>>> '--with-debugging=0', > > > >>>>>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', > > > >>>>>> '--with-openmp=1', > > > >>>>>> '--with-threadsaftey=1', > > > >>>>>> '--with-log=1' > > > >>>>>> ] > > > >>>>>> configure.petsc_configure(configure_options) > > > >>>>>> > > > >>>>>> > > > >>>>>> > > > >>>>>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay > > > >>>>>> wrote: > > > >>>>>> > > > >>>>>>> The crash is inside Superlu_DIST - so don't know what to suggest. > > > >>>>>>> > > > >>>>>>> Might have to debug this via debugger and check with Sherry. 
> > > >>>>>>> > > > >>>>>>> Satish > > > >>>>>>> > > > >>>>>>> On Wed, 15 Apr 2020, Mark Adams wrote: > > > >>>>>>> > > > >>>>>>> > Ah, OK 'check' will test SuperLU. Semi worked: > > > >>>>>>> > > > > >>>>>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make > > > >>>>>>> > PETSC_DIR=/ccs/home/adams/petsc > > > >>>>>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > > >>>>>>> > check > > > >>>>>>> > Running check examples to verify correct installation > > > >>>>>>> > Using PETSC_DIR=/ccs/home/adams/petsc and > > > >>>>>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 > > MPI > > > >>>>>>> process > > > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 > > MPI > > > >>>>>>> processes > > > >>>>>>> > 2c2,38 > > > >>>>>>> > < Number of SNES iterations = 2 > > > >>>>>>> > --- > > > >>>>>>> > > CUDA version: v 10010 > > > >>>>>>> > > CUDA Devices: > > > >>>>>>> > > > > > >>>>>>> > > 0 : Tesla V100-SXM2-16GB 7 0 > > > >>>>>>> > > Global memory: 16128 mb > > > >>>>>>> > > Shared memory: 48 kb > > > >>>>>>> > > Constant memory: 64 kb > > > >>>>>>> > > Block registers: 65536 > > > >>>>>>> > > > > > >>>>>>> > > ex19: cudahook.cc:762: CUresult host_free_callback(void*): > > > >>>>>>> Assertion > > > >>>>>>> > `cacheNode != __null' failed. > > > >>>>>>> > > [h16n07:78357] *** Process received signal *** > > > >>>>>>> > > [h16n07:78357] Signal: Aborted (6) > > > >>>>>>> > > [h16n07:78357] Signal code: (1704218624) > > > >>>>>>> > > [h16n07:78357] [ 0] [0x2000000504d8] > > > >>>>>>> > > [h16n07:78357] [ 1] > > /lib64/libc.so.6(abort+0x2b4)[0x200023992094] > > > >>>>>>> > > [h16n07:78357] [ 2] > > /lib64/libc.so.6(+0x356d4)[0x2000239856d4] > > > >>>>>>> > > [h16n07:78357] [ 3] > > > >>>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] > > > >>>>>>> > > [h16n07:78357] [ 4] > > > >>>>>>> > > > > >>>>>>> > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > > >>>>>>> > > [h16n07:78357] [ 5] > > > >>>>>>> > > > > >>>>>>> > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > > >>>>>>> > > [h16n07:78357] [ 6] > > > >>>>>>> > > > > >>>>>>> > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] > > > >>>>>>> > > [h16n07:78357] [ 7] > > > >>>>>>> > > > > >>>>>>> > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] > > > >>>>>>> > > [h16n07:78357] [ 8] > > > >>>>>>> > > > > >>>>>>> > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] > > > >>>>>>> > > [h16n07:78357] [ 9] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] > > > >>>>>>> > > [h16n07:78357] [10] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] > > > >>>>>>> > > [h16n07:78357] [11] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] > > > >>>>>>> > > [h16n07:78357] [12] > > > >>>>>>> > > > > >>>>>>> > > 
/ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] > > > >>>>>>> > > [h16n07:78357] [13] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] > > > >>>>>>> > > [h16n07:78357] [14] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] > > > >>>>>>> > > [h16n07:78357] [15] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] > > > >>>>>>> > > [h16n07:78357] [16] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] > > > >>>>>>> > > [h16n07:78357] [17] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] > > > >>>>>>> > > [h16n07:78357] [18] > > > >>>>>>> > > > > >>>>>>> > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] > > > >>>>>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] > > > >>>>>>> > > [h16n07:78357] [20] > > /lib64/libc.so.6(+0x25200)[0x200023975200] > > > >>>>>>> > > [h16n07:78357] [21] > > > >>>>>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] > > > >>>>>>> > > [h16n07:78357] *** End of error message *** > > > >>>>>>> > > ERROR: One or more process (first noticed rank 0) terminated > > > >>>>>>> with signal > > > >>>>>>> > 6 > > > >>>>>>> > /ccs/home/adams/petsc/src/snes/tutorials > > > >>>>>>> > Possible problem with ex19 running with superlu_dist, diffs > > above > > > >>>>>>> > ========================================= > > > >>>>>>> > > > > >>>>>>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay < > > balay at mcs.anl.gov> > > > >>>>>>> wrote: > > > >>>>>>> > > > > >>>>>>> > > Please send configure.log > > > >>>>>>> > > > > > >>>>>>> > > This is what I get on my linux build: > > > >>>>>>> > > > > > >>>>>>> > > [balay at p1 petsc]$ ./configure > > > >>>>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda > > --with-cuda=1 > > > >>>>>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make > > check > > > >>>>>>> > > > > > >>>>>>> > > Running check examples to verify correct installation > > > >>>>>>> > > Using PETSC_DIR=/home/balay/petsc and > > > >>>>>>> PETSC_ARCH=arch-linux-c-debug > > > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 > > > >>>>>>> MPI process > > > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 > > > >>>>>>> MPI processes > > > >>>>>>> > > 1a2,19 > > > >>>>>>> > > > CUDA version: v 10020 > > > >>>>>>> > > > CUDA Devices: > > > >>>>>>> > > > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > > > >>>>>>> > > > Global memory: 3911 mb > > > >>>>>>> > > > Shared memory: 48 kb > > > >>>>>>> > > > Constant memory: 64 kb > > > >>>>>>> > > > Block registers: 65536 > > > >>>>>>> > > > > > > >>>>>>> > > > CUDA version: v 10020 > > > >>>>>>> > > > CUDA Devices: > > > >>>>>>> > > > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > > > >>>>>>> > > > Global memory: 3911 mb > > > >>>>>>> > > > Shared memory: 48 kb > > > >>>>>>> > > > Constant memory: 64 kb > > > >>>>>>> > > > Block registers: 65536 > > > >>>>>>> > > > > > > >>>>>>> > > /home/balay/petsc/src/snes/tutorials > > > >>>>>>> > > Possible problem with ex19 running with superlu_dist, diffs > > above > > > >>>>>>> > > 
========================================= > > > >>>>>>> > > Fortran example src/snes/tutorials/ex5f run successfully > > with 1 > > > >>>>>>> MPI process > > > >>>>>>> > > Completed test examples > > > >>>>>>> > > > > > >>>>>>> > > > > > >>>>>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: > > > >>>>>>> > > > > > >>>>>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay < > > > >>>>>>> balay at mcs.anl.gov> wrote: > > > >>>>>>> > > > > > > >>>>>>> > > > > The build should work. It should give some verbose info > > [at > > > >>>>>>> runtime] > > > >>>>>>> > > > > regarding GPUs - from the following code. > > > >>>>>>> > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > I don't see that and I am running GPUs in my code and have > > > >>>>>>> gotten > > > >>>>>>> > > cusparse > > > >>>>>>> > > > LU to run. Should I use '-info :sys:' ? > > > >>>>>>> > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > > > >>>>>>> > > > > void DisplayHeader() > > > >>>>>>> > > > > { > > > >>>>>>> > > > > const int kb = 1024; > > > >>>>>>> > > > > const int mb = kb * kb; > > > >>>>>>> > > > > // cout << "NBody.GPU" << endl << "=========" << > > endl << > > > >>>>>>> endl; > > > >>>>>>> > > > > > > > >>>>>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > > > >>>>>>> > > > > //cout << "Thrust version: v" << > > THRUST_MAJOR_VERSION << > > > >>>>>>> "." << > > > >>>>>>> > > > > THRUST_MINOR_VERSION << endl << endl; > > > >>>>>>> > > > > > > > >>>>>>> > > > > int devCount; > > > >>>>>>> > > > > cudaGetDeviceCount(&devCount); > > > >>>>>>> > > > > printf( "CUDA Devices: \n \n"); > > > >>>>>>> > > > > > > > >>>>>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > > >>>>>>> > > > > > > > >>>>>>> > > > > Satish > > > >>>>>>> > > > > > > > >>>>>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > > >>>>>>> > > > > > > > >>>>>>> > > > > > I remember Barry said superlu gpu support is broken. > > > >>>>>>> > > > > > --Junchao Zhang > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams < > > > >>>>>>> mfadams at lbl.gov> wrote: > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > How does one use SuperLU with GPUs. I don't seem to > > get > > > >>>>>>> any GPU > > > >>>>>>> > > > > > > performance data so I assume GPUs are not getting > > turned > > > >>>>>>> on. Am I > > > >>>>>>> > > wrong > > > >>>>>>> > > > > > > about that? 
> > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > > I configure with: > > > >>>>>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g -O2 > > -fPIC > > > >>>>>>> -fopenmp" > > > >>>>>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g > > > >>>>>>> -O2 -fPIC > > > >>>>>>> > > > > -fopenmp" > > > >>>>>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > > > >>>>>>> > > --with-cxx=mpicxx > > > >>>>>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 > > > >>>>>>> --with-cudac=nvcc > > > >>>>>>> > > > > > > --download-p4est=1 --download-zlib --download-hdf5=1 > > > >>>>>>> > > --download-metis > > > >>>>>>> > > > > > > --download-superlu --download-superlu_dist > > > >>>>>>> --with-make-np=16 > > > >>>>>>> > > > > > > --download-parmetis --download-triangle > > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > > > >>>>>>> > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > > >>>>>>> > > > > > > -lblas -llapack" --with-cc=mpicc > > > >>>>>>> --with-shared-libraries=1 > > > >>>>>>> > > --with-x=0 > > > >>>>>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 > > > >>>>>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > --with-openmp=1 > > > >>>>>>> > > > > > > --with-threadsaftey=1 --with-log=1 > > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > > Thanks, > > > >>>>>>> > > > > > > Mark > > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > > >>>>>>> > > > > > >>>>>>> > > > > >>>>>>> > > > >>>>>>> > > > > > > > > From mfadams at lbl.gov Sun Apr 19 10:51:59 2020 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 19 Apr 2020 11:51:59 -0400 Subject: [petsc-users] SuperLU + GPUs In-Reply-To: References: Message-ID: Ahhh, thanks, OK, now I am able to reproduce the error in the test. I can work on that, Thanks again, On Sun, Apr 19, 2020 at 11:45 AM Satish Balay wrote: > > *[0]PETSC ERROR: Could not locate solver package superlu for > factorization > > Here you are requesting 'superlu' - instead of 'superlu_dist' - hence this > error. > > Satish > > On Sun, 19 Apr 2020, Mark Adams wrote: > > > > > > > > > > > > > > > --download-superlu --download-superlu_dist > > > > > > You are installing with both superlu and superlu_dist. To verify - > remove > > > superlu - and keep only superlu_dist > > > > > > > I tried this earlier. Here is the error message: > > > > 0 SNES Function norm 1.511918966798e-02 > > [0]PETSC ERROR: --------------------- Error Message > > -------------------------------------------------------------- > > [0]PETSC ERROR: See > > https://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for > > possible LU and Cholesky solvers > > > > *[0]PETSC ERROR: Could not locate solver package superlu for > factorization > > type LU and matrix type seqaij. Perhaps you must ./configure with > > --download-superlu*[0]PETSC ERROR: See > > https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting. 
> > [0]PETSC ERROR: Petsc Development GIT revision: v3.13-163-g4c71feb GIT > > Date: 2020-04-18 15:35:50 -0400 > > [0]PETSC ERROR: ./ex112d on a arch-summit-opt-gnu-cuda-omp-2db named > h23n05 > > by adams Sun Apr 19 11:39:05 2020 > > [0]PETSC ERROR: Configure options --with-fc=0 --COPTFLAGS="-g -O2 -fPIC > > -fopenmp -DFP_DIM=2" --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" > --FOPTFLAGS="-g > > -O2 -fPIC -fopenmp" --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 --with-batch=0 > > --with-cxx=mpicxx --with-mpiexec="jsrun -g1" --with-cuda=1 > > --with-cudac=nvcc --download-p4est=1 --download-zlib --download-hdf5=1 > > --download-metis --download-superlu_dist --with-make-np=16 > > --download-parmetis --download-triangle > > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > -lblas -llapack" --with-cc=mpicc --with-shared-libraries=1 --with-x=0 > > --with-64-bit-indices=0 --with-debugging=0 > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp-2db --with-openmp=1 > > --with-threadsaftey=1 --with-log=1 > > [0]PETSC ERROR: #1 MatGetFactor() line 4490 in > > /autofs/nccs-svm1_home1/adams/petsc/src/mat/interface/matrix.c > > [0]PETSC ERROR: #2 PCSetUp_LU() line 88 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/impls/factor/lu/lu.c > > [0]PETSC ERROR: #3 PCSetUp() line 894 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: #4 KSPSetUp() line 376 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: #5 KSPSolve_Private() line 633 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: #6 KSPSolve() line 853 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: #7 SNESSolve_NEWTONLS() line 225 in > > /autofs/nccs-svm1_home1/adams/petsc/src/snes/impls/ls/ls.c > > [0]PETSC ERROR: #8 SNESSolve() line 4520 in > > /autofs/nccs-svm1_home1/adams/petsc/src/snes/interface/snes.c > > [0]PETSC ERROR: #9 TSStep_ARKIMEX() line 811 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ts/impls/arkimex/arkimex.c > > [0]PETSC ERROR: #10 TSStep() line 3721 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c > > [0]PETSC ERROR: #11 TSSolve() line 4127 in > > /autofs/nccs-svm1_home1/adams/petsc/src/ts/interface/ts.c > > [0]PETSC ERROR: #12 main() line 955 in ex11.c > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > SuperLU: > > > > > Version: 5.2.1 > > > > > Includes: > > > -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > > > > > Library: > > > -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > > > > > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu > > > > > > > > > > which is serial superlu, not superlu_dist. These are 2 different > > > codes. > > > > > > > > > > Sherry > > > > > > > > > > On Sat, Apr 18, 2020 at 4:54 PM Mark Adams > wrote: > > > > > > > > > >> > > > > >> > > > > >> On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li > wrote: > > > > >> > > > > >>> Mark, > > > > >>> > > > > >>> It seems you are talking about serial superlu? There is no GPU > > > support > > > > >>> in it. Only superlu_dist has GPU. > > > > >>> > > > > >> > > > > >> I am using superlu_dist on one processor. Should that work? > > > > >> > > > > >> > > > > >>> > > > > >>> But I don't know why there is a crash. 
> > > > >>> > > > > >>> Sherry > > > > >>> > > > > >>> On Sat, Apr 18, 2020 at 11:44 AM Mark Adams > wrote: > > > > >>> > > > > >>>> Sherry, I did rebase with master this week: > > > > >>>> > > > > >>>> SuperLU: > > > > >>>> Version: 5.2.1 > > > > >>>> Includes: > > > -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include > > > > >>>> Library: > > > > >>>> > -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > > > > >>>> -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib > -lsuperlu > > > > >>>> > > > > >>>> I see the same thing with a debug build. > > > > >>>> > > > > >>>> If anyone is interested in looking at this, I was also able to > see > > > that > > > > >>>> plex/ex10 in my branch, which is a very simple test , also does > not > > > crash > > > > >>>> and also does not seem to use GPUs in SuperLU. > > > > >>>> > > > > >>>> > > > > >>>> On Sat, Apr 18, 2020 at 11:46 AM Xiaoye S. Li > wrote: > > > > >>>> > > > > >>>>> When you install "-download-superlu_dist", that is from > 'master' > > > > >>>>> branch? > > > > >>>>> > > > > >>>>> In the error trace, I recognized this: > > > > >>>>> > > > > >>>>> > [h50n09:102287] [ 9] /ccs/home/adams/petsc/arch- > > > > >>>>> summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_ > > > > >>>>> LU+0xc4)[0x20000195aff4] > > > > >>>>> > > > > >>>>> This is to free the L and U data structures at the end of the > > > program. > > > > >>>>> > > > > >>>>> Sherry > > > > >>>>> > > > > >>>>> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams > > > wrote: > > > > >>>>> > > > > >>>>>> Back to SuperLU + GPUs (adding Sherry) > > > > >>>>>> > > > > >>>>>> I get this error (appended) running 'check', as I said > before. It > > > > >>>>>> looks like ex19 is *failing* with CUDA but it is not clear it > has > > > > >>>>>> anything to do with SuperLU. I can not find these diagnostics > > > that got > > > > >>>>>> printed after the error in PETSc or SuperLU. > > > > >>>>>> > > > > >>>>>> So this is a problem, but moving on to my code (plex/ex11 in > > > > >>>>>> mark/feature-xgc-interface-rebase-v2, configure script > appended). > > > It runs. > > > > >>>>>> I use superlu and GPUs, but they do not seem to be used in > > > SuperLU: > > > > >>>>>> > > > > >>>>>> > > > > >>>>>> > > > > ------------------------------------------------------------------------------------------------------------------------ > > > > >>>>>> Event Count Time (sec) Flop > > > > >>>>>> --- Global --- --- Stage ---- Total GPU - > > > CpuToGpu - - > > > > >>>>>> GpuToCpu - GPU > > > > >>>>>> Max Ratio Max Ratio Max Ratio Mess > > > > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s Mflop/s > > > Count Size > > > > >>>>>> Count Size %F > > > > >>>>>> > > > > >>>>>> > > > > --------------------------------------------------------------------------------------------------------------------------------------------------------------- > > > > >>>>>> .... > > > > >>>>>> MatLUFactorNum 12 1.0 *2.3416e+01* 1.0 0.00e+00 0.0 > 0.0e+00 > > > > >>>>>> 0.0e+00 0.0e+00 31 0 0 0 0 31 0 0 0 0 0 0 > > > *0 > > > > >>>>>> 0.00e+00 0 0.00e+00 0* > > > > >>>>>> > > > > >>>>>> No CUDA version. The times are the same and no GPU > > > > >>>>>> communication above. So SuperLU does not seem to be using > GPUs. 
> > > > >>>>>> > > > > >>>>>> > > > > >>>>>> > > > > ------------------------------------------------------------------------------------------------------------------------ > > > > >>>>>> Event Count Time (sec) Flop > > > > >>>>>> --- Global --- --- Stage ---- Total > > > > >>>>>> Max Ratio Max Ratio Max Ratio Mess > > > > >>>>>> AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > > > >>>>>> > > > > >>>>>> > > > > ------------------------------------------------------------------------------------------------------------------------ > > > > >>>>>> .... > > > > >>>>>> MatLUFactorNum 12 1.0 *2.3421e+01* 1.0 0.00e+00 0.0 > 0.0e+00 > > > > >>>>>> 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0 > > > > >>>>>> > > > > >>>>>> There are some differences: ex19 use DMDA and I use DMPlex, > > > 'check' > > > > >>>>>> is run in my home directory, where files can not be written, > and > > > I run my > > > > >>>>>> code in the project areas. > > > > >>>>>> > > > > >>>>>> The timings are different without superlu so I think superlu > is > > > being > > > > >>>>>> used. THis is how I run this (w and w/o -mat_superlu_equil > > > -dm_mat_type > > > > >>>>>> sell) > > > > >>>>>> > > > > >>>>>> jsrun -n 1 -a 1 -c 2 -g 1 ./ex113d_no_cuda -dim 3 -dm_view > > > > >>>>>> hdf5:re33d.h5 -vec_view hdf5:re33d.h5::append -test_type > spitzer > > > -Ez 0 > > > > >>>>>> -petscspace_degree 2 -mass_petscspace_degree 2 > > > -petscspace_poly_tensor 1 > > > > >>>>>> -mass_petscspace_poly_tensor 1 -dm_type p8est -ion_masses 4 > > > -ion_charges 2 > > > > >>>>>> -thermal_temps 4,4 -n 1,.5 -n_0 1e20 -ts_monitor > -ts_adapt_monitor > > > > >>>>>> -snes_rtol 1.e-6 -snes_stol 1.e-9 -snes_monitor > > > -snes_converged_reason > > > > >>>>>> -snes_max_it 15 -ts_type arkimex -ts_exact_final_time stepover > > > > >>>>>> -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-3 > > > -ts_dt 1e-1 > > > > >>>>>> -ts_adapt_clip .25,1.05 -ts_adapt_dt_max 10 -ts_adapt_dt_min > 2e-2 > > > > >>>>>> -ts_max_time 3200 -ts_max_steps 1 -ts_adapt_scale_solve_failed > > > 0.75 > > > > >>>>>> -ts_adapt_time_step_increase_delay 5 -pc_type lu -ksp_type > preonly > > > > >>>>>> -amr_levels_max 11 -amr_re_levels 0 -amr_z_refine1 0 > > > -amr_z_refine2 0 > > > > >>>>>> -amr_post_refine 0 -domain_radius -.95 -re_radius 4 > -z_radius1 8 > > > -z_radius2 > > > > >>>>>> .1 -plot_dt .10 -impurity_source_type pulse -pulse_start_time > 2600 > > > > >>>>>> -pulse_width_time 100 -pulse_rate 1e+0 -t_cold .005 -info > > > :dm,tsadapt: > > > > >>>>>> -sub_thread_block_size 4 -options_left -log_view > > > -pc_factor_mat_solver_type > > > > >>>>>> superlu -mat_superlu_equil -dm_mat_type sell > > > > >>>>>> > > > > >>>>>> So there is a bug in ex19 on SUMMIT and I am not getting GPUs > > > turned > > > > >>>>>> on in SuperLU. > > > > >>>>>> Thoughts? 
> > > > >>>>>> > > > > >>>>>> Thanks, > > > > >>>>>> Mark > > > > >>>>>> > > > > >>>>>> 09:28 mark/feature-xgc-interface-rebase-v2 *= ~/petsc$ make > > > > >>>>>> PETSC_DIR=/ccs/home/adams/petsc > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > > > >>>>>> check > > > > >>>>>> Running check examples to verify correct installation > > > > >>>>>> Using PETSC_DIR=/ccs/home/adams/petsc and > > > > >>>>>> PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > > > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 1 > MPI > > > > >>>>>> process > > > > >>>>>> C/C++ example src/snes/tutorials/ex19 run successfully with 2 > MPI > > > > >>>>>> processes > > > > >>>>>> 2c2,39 > > > > >>>>>> < Number of SNES iterations = 2 > > > > >>>>>> --- > > > > >>>>>> > > > > >>>>>> *> ex19: cudahook.cc:762: CUresult host_free_callback(void*): > > > > >>>>>> Assertion `cacheNode != __null' failed.*> [h50n09:102287] *** > > > > >>>>>> Process received signal *** > > > > >>>>>> > CUDA version: v 10010 > > > > >>>>>> > CUDA Devices: > > > > >>>>>> > > > > > >>>>>> > 0 : Tesla V100-SXM2-16GB 7 0 > > > > >>>>>> > Global memory: 16128 mb > > > > >>>>>> > Shared memory: 48 kb > > > > >>>>>> > Constant memory: 64 kb > > > > >>>>>> > Block registers: 65536 > > > > >>>>>> > > > > > >>>>>> > [h50n09:102287] Signal: Aborted (6) > > > > >>>>>> > [h50n09:102287] Associated errno: Unknown error 1072693248 > > > > >>>>>> (1072693248) > > > > >>>>>> > [h50n09:102287] Signal code: User function (kill, sigsend, > > > abort, > > > > >>>>>> etc.) (0) > > > > >>>>>> > [h50n09:102287] [ 0] [0x2000000504d8] > > > > >>>>>> > [h50n09:102287] [ 1] > > > /lib64/libc.so.6(abort+0x2b4)[0x200021bf2094] > > > > >>>>>> > [h50n09:102287] [ 2] > /lib64/libc.so.6(+0x356d4)[0x200021be56d4] > > > > >>>>>> > [h50n09:102287] [ 3] > > > > >>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x200021be57c4] > > > > >>>>>> > [h50n09:102287] [ 4] > > > > >>>>>> > > > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > > > >>>>>> > [h50n09:102287] [ 5] > > > > >>>>>> > > > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > > > >>>>>> > [h50n09:102287] [ 6] > > > > >>>>>> > > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x20000ed02f50] > > > > >>>>>> > [h50n09:102287] [ 7] > > > > >>>>>> > > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x20000ecd1db8] > > > > >>>>>> > [h50n09:102287] [ 8] > > > > >>>>>> > > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x20000ed12ea4] > > > > >>>>>> > [h50n09:102287] [ 9] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0xc4)[0x20000195aff4] > > > > >>>>>> > [h50n09:102287] [10] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x7cdb70)[0x2000008bdb70] > > > > >>>>>> > [h50n09:102287] [11] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x1ec)[0x2000005f1a8c] > > > > >>>>>> > [h50n09:102287] [12] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xbf8270)[0x200000ce8270] > > 
> > >>>>>> > [h50n09:102287] [13] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0x1a4)[0x200000d8d5a4] > > > > >>>>>> > [h50n09:102287] [14] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x40c)[0x200000dc498c] > > > > >>>>>> > [h50n09:102287] [15] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xcd56fc)[0x200000dc56fc] > > > > >>>>>> > [h50n09:102287] [16] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x20)[0x200000dc8260] > > > > >>>>>> > [h50n09:102287] [17] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(+0xe0a170)[0x200000efa170] > > > > >>>>>> > [h50n09:102287] [18] > > > > >>>>>> > > > > /ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x814)[0x200000ebd394] > > > > >>>>>> > [h50n09:102287] [19] ./ex19[0x10001a6c] > > > > >>>>>> > [h50n09:102287] [20] > /lib64/libc.so.6(+0x25200)[0x200021bd5200] > > > > >>>>>> > [h50n09:102287] [21] > > > > >>>>>> /lib64/libc.so.6(__libc_start_main+0xc4)[0x200021bd53f4] > > > > >>>>>> > [h50n09:102287] *** End of error message *** > > > > >>>>>> > ERROR: One or more process (first noticed rank 0) > terminated > > > with > > > > >>>>>> signal 6 > > > > >>>>>> /ccs/home/adams/petsc/src/snes/tutorials > > > > >>>>>> Possible problem with ex19 running with superlu_dist, diffs > above > > > > >>>>>> > > > > >>>>>> > > > > >>>>>> > > > > >>>>>> > > > > >>>>>> #!/usr/bin/env python > > > > >>>>>> if __name__ == '__main__': > > > > >>>>>> import sys > > > > >>>>>> import os > > > > >>>>>> sys.path.insert(0, os.path.abspath('config')) > > > > >>>>>> import configure > > > > >>>>>> configure_options = [ > > > > >>>>>> '--with-fc=0', > > > > >>>>>> '--COPTFLAGS=-g -O2 -fPIC -fopenmp', > > > > >>>>>> '--CXXOPTFLAGS=-g -O2 -fPIC -fopenmp', > > > > >>>>>> '--FOPTFLAGS=-g -O2 -fPIC -fopenmp', > > > > >>>>>> '--CUDAOPTFLAGS=-O2 -g', > > > > >>>>>> '--with-ssl=0', > > > > >>>>>> '--with-batch=0', > > > > >>>>>> '--with-cxx=mpicxx', > > > > >>>>>> '--with-mpiexec=jsrun -g1', > > > > >>>>>> '--with-cuda=1', > > > > >>>>>> '--with-cudac=nvcc', > > > > >>>>>> '--download-p4est=1', > > > > >>>>>> '--download-zlib', > > > > >>>>>> '--download-hdf5=1', > > > > >>>>>> '--download-metis', > > > > >>>>>> '--download-superlu', > > > > >>>>>> '--download-superlu_dist', > > > > >>>>>> '--with-make-np=16', > > > > >>>>>> # '--with-hwloc=0', > > > > >>>>>> '--download-parmetis', > > > > >>>>>> # '--download-hypre', > > > > >>>>>> '--download-triangle', > > > > >>>>>> # '--download-amgx', > > > > >>>>>> # '--download-fblaslapack', > > > > >>>>>> '--with-blaslapack-lib=-L' + > > > > >>>>>> os.environ['OLCF_NETLIB_LAPACK_ROOT'] + '/lib64 -lblas > -llapack', > > > > >>>>>> '--with-cc=mpicc', > > > > >>>>>> # '--with-fc=mpif90', > > > > >>>>>> '--with-shared-libraries=1', > > > > >>>>>> # '--known-mpi-shared-libraries=1', > > > > >>>>>> '--with-x=0', > > > > >>>>>> '--with-64-bit-indices=0', > > > > >>>>>> '--with-debugging=0', > > > > >>>>>> 'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp', > > > > >>>>>> '--with-openmp=1', > > > > >>>>>> '--with-threadsaftey=1', > > > > >>>>>> '--with-log=1' > > > > >>>>>> ] > > > > >>>>>> configure.petsc_configure(configure_options) > > > > >>>>>> > > > > >>>>>> > > > > >>>>>> > > > > >>>>>> On Wed, Apr 15, 2020 at 9:58 PM Satish Balay < > balay at 
mcs.anl.gov> > > > > >>>>>> wrote: > > > > >>>>>> > > > > >>>>>>> The crash is inside Superlu_DIST - so don't know what to > suggest. > > > > >>>>>>> > > > > >>>>>>> Might have to debug this via debugger and check with Sherry. > > > > >>>>>>> > > > > >>>>>>> Satish > > > > >>>>>>> > > > > >>>>>>> On Wed, 15 Apr 2020, Mark Adams wrote: > > > > >>>>>>> > > > > >>>>>>> > Ah, OK 'check' will test SuperLU. Semi worked: > > > > >>>>>>> > > > > > >>>>>>> > s20:13 mark/feature-xgc-interface-rebase *= ~/petsc$ make > > > > >>>>>>> > PETSC_DIR=/ccs/home/adams/petsc > > > > >>>>>>> PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > > > >>>>>>> > check > > > > >>>>>>> > Running check examples to verify correct installation > > > > >>>>>>> > Using PETSC_DIR=/ccs/home/adams/petsc and > > > > >>>>>>> > PETSC_ARCH=arch-summit-dbg-gnu-cuda-omp > > > > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully > with 1 > > > MPI > > > > >>>>>>> process > > > > >>>>>>> > C/C++ example src/snes/tutorials/ex19 run successfully > with 2 > > > MPI > > > > >>>>>>> processes > > > > >>>>>>> > 2c2,38 > > > > >>>>>>> > < Number of SNES iterations = 2 > > > > >>>>>>> > --- > > > > >>>>>>> > > CUDA version: v 10010 > > > > >>>>>>> > > CUDA Devices: > > > > >>>>>>> > > > > > > >>>>>>> > > 0 : Tesla V100-SXM2-16GB 7 0 > > > > >>>>>>> > > Global memory: 16128 mb > > > > >>>>>>> > > Shared memory: 48 kb > > > > >>>>>>> > > Constant memory: 64 kb > > > > >>>>>>> > > Block registers: 65536 > > > > >>>>>>> > > > > > > >>>>>>> > > ex19: cudahook.cc:762: CUresult > host_free_callback(void*): > > > > >>>>>>> Assertion > > > > >>>>>>> > `cacheNode != __null' failed. > > > > >>>>>>> > > [h16n07:78357] *** Process received signal *** > > > > >>>>>>> > > [h16n07:78357] Signal: Aborted (6) > > > > >>>>>>> > > [h16n07:78357] Signal code: (1704218624) > > > > >>>>>>> > > [h16n07:78357] [ 0] [0x2000000504d8] > > > > >>>>>>> > > [h16n07:78357] [ 1] > > > /lib64/libc.so.6(abort+0x2b4)[0x200023992094] > > > > >>>>>>> > > [h16n07:78357] [ 2] > > > /lib64/libc.so.6(+0x356d4)[0x2000239856d4] > > > > >>>>>>> > > [h16n07:78357] [ 3] > > > > >>>>>>> /lib64/libc.so.6(__assert_fail+0x64)[0x2000239857c4] > > > > >>>>>>> > > [h16n07:78357] [ 4] > > > > >>>>>>> > > > > > >>>>>>> > > > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(_Z18host_free_callbackPv+0x2d8)[0x2000000cd2c8] > > > > >>>>>>> > > [h16n07:78357] [ 5] > > > > >>>>>>> > > > > > >>>>>>> > > > > /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/spectrum-mpi-10.3.1.2-20200121-awz2q5brde7wgdqqw4ugalrkukeub4eb/container/../lib/libpami_cudahook.so(cuMemFreeHost+0xb0)[0x2000000c3cc0] > > > > >>>>>>> > > [h16n07:78357] [ 6] > > > > >>>>>>> > > > > > >>>>>>> > > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x42f50)[0x200010aa2f50] > > > > >>>>>>> > > [h16n07:78357] [ 7] > > > > >>>>>>> > > > > > >>>>>>> > > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(+0x11db8)[0x200010a71db8] > > > > >>>>>>> > > [h16n07:78357] [ 8] > > > > >>>>>>> > > > > > >>>>>>> > > > > /sw/summit/cuda/10.1.243/lib64/libcudart.so.10.1(cudaFreeHost+0x74)[0x200010ab2ea4] > > > > >>>>>>> > > [h16n07:78357] [ 9] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libsuperlu_dist.so.6(dDestroy_LU+0x150)[0x200003188058] > > > > >>>>>>> > > [h16n07:78357] [10] > > > > >>>>>>> > > > 
> > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x12ebc6c)[0x2000013dbc6c] > > > > >>>>>>> > > [h16n07:78357] [11] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(MatLUFactorNumeric+0x934)[0x200000d2fae4] > > > > >>>>>>> > > [h16n07:78357] [12] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1cca7a4)[0x200001dba7a4] > > > > >>>>>>> > > [h16n07:78357] [13] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(PCSetUp+0xde0)[0x200001f3f990] > > > > >>>>>>> > > [h16n07:78357] [14] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSetUp+0x1848)[0x200001fc5594] > > > > >>>>>>> > > [h16n07:78357] [15] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x1ed9908)[0x200001fc9908] > > > > >>>>>>> > > [h16n07:78357] [16] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(KSPSolve+0x5d0)[0x200001fcc690] > > > > >>>>>>> > > [h16n07:78357] [17] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(+0x21e16ac)[0x2000022d16ac] > > > > >>>>>>> > > [h16n07:78357] [18] > > > > >>>>>>> > > > > > >>>>>>> > > > > /ccs/home/adams/petsc/arch-summit-dbg-gnu-cuda-omp/lib/libpetsc.so.3.013(SNESSolve+0x23f4)[0x2000022255c0] > > > > >>>>>>> > > [h16n07:78357] [19] ./ex19[0x10002ac8] > > > > >>>>>>> > > [h16n07:78357] [20] > > > /lib64/libc.so.6(+0x25200)[0x200023975200] > > > > >>>>>>> > > [h16n07:78357] [21] > > > > >>>>>>> > /lib64/libc.so.6(__libc_start_main+0xc4)[0x2000239753f4] > > > > >>>>>>> > > [h16n07:78357] *** End of error message *** > > > > >>>>>>> > > ERROR: One or more process (first noticed rank 0) > terminated > > > > >>>>>>> with signal > > > > >>>>>>> > 6 > > > > >>>>>>> > /ccs/home/adams/petsc/src/snes/tutorials > > > > >>>>>>> > Possible problem with ex19 running with superlu_dist, diffs > > > above > > > > >>>>>>> > ========================================= > > > > >>>>>>> > > > > > >>>>>>> > On Wed, Apr 15, 2020 at 5:58 PM Satish Balay < > > > balay at mcs.anl.gov> > > > > >>>>>>> wrote: > > > > >>>>>>> > > > > > >>>>>>> > > Please send configure.log > > > > >>>>>>> > > > > > > >>>>>>> > > This is what I get on my linux build: > > > > >>>>>>> > > > > > > >>>>>>> > > [balay at p1 petsc]$ ./configure > > > > >>>>>>> > > --with-mpi-dir=/home/petsc/soft/openmpi-4.0.2-cuda > > > --with-cuda=1 > > > > >>>>>>> > > --with-openmp=1 --download-superlu-dist=1 && make && make > > > check > > > > >>>>>>> > > > > > > >>>>>>> > > Running check examples to verify correct installation > > > > >>>>>>> > > Using PETSC_DIR=/home/balay/petsc and > > > > >>>>>>> PETSC_ARCH=arch-linux-c-debug > > > > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully > with 1 > > > > >>>>>>> MPI process > > > > >>>>>>> > > C/C++ example src/snes/tutorials/ex19 run successfully > with 2 > > > > >>>>>>> MPI processes > > > > >>>>>>> > > 1a2,19 > > > > >>>>>>> > > > CUDA version: v 10020 > > > > >>>>>>> > > > CUDA Devices: > > > > >>>>>>> > > > > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > > > > >>>>>>> > > > Global memory: 3911 mb > > > > >>>>>>> > > > Shared memory: 48 kb > > > > >>>>>>> > > > Constant memory: 64 kb > > > > 
>>>>>>> > > > Block registers: 65536 > > > > >>>>>>> > > > > > > > >>>>>>> > > > CUDA version: v 10020 > > > > >>>>>>> > > > CUDA Devices: > > > > >>>>>>> > > > > > > > >>>>>>> > > > 0 : Quadro T2000 7 5 > > > > >>>>>>> > > > Global memory: 3911 mb > > > > >>>>>>> > > > Shared memory: 48 kb > > > > >>>>>>> > > > Constant memory: 64 kb > > > > >>>>>>> > > > Block registers: 65536 > > > > >>>>>>> > > > > > > > >>>>>>> > > /home/balay/petsc/src/snes/tutorials > > > > >>>>>>> > > Possible problem with ex19 running with superlu_dist, > diffs > > > above > > > > >>>>>>> > > ========================================= > > > > >>>>>>> > > Fortran example src/snes/tutorials/ex5f run successfully > > > with 1 > > > > >>>>>>> MPI process > > > > >>>>>>> > > Completed test examples > > > > >>>>>>> > > > > > > >>>>>>> > > > > > > >>>>>>> > > On Wed, 15 Apr 2020, Mark Adams wrote: > > > > >>>>>>> > > > > > > >>>>>>> > > > On Wed, Apr 15, 2020 at 5:17 PM Satish Balay < > > > > >>>>>>> balay at mcs.anl.gov> wrote: > > > > >>>>>>> > > > > > > > >>>>>>> > > > > The build should work. It should give some verbose > info > > > [at > > > > >>>>>>> runtime] > > > > >>>>>>> > > > > regarding GPUs - from the following code. > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > > > >>>>>>> > > > I don't see that and I am running GPUs in my code and > have > > > > >>>>>>> gotten > > > > >>>>>>> > > cusparse > > > > >>>>>>> > > > LU to run. Should I use '-info :sys:' ? > > > > >>>>>>> > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > > >>>>> SRC/cublas_utils.c >>>>>>>>>>> > > > > >>>>>>> > > > > void DisplayHeader() > > > > >>>>>>> > > > > { > > > > >>>>>>> > > > > const int kb = 1024; > > > > >>>>>>> > > > > const int mb = kb * kb; > > > > >>>>>>> > > > > // cout << "NBody.GPU" << endl << "=========" << > > > endl << > > > > >>>>>>> endl; > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > printf("CUDA version: v %d\n",CUDART_VERSION); > > > > >>>>>>> > > > > //cout << "Thrust version: v" << > > > THRUST_MAJOR_VERSION << > > > > >>>>>>> "." << > > > > >>>>>>> > > > > THRUST_MINOR_VERSION << endl << endl; > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > int devCount; > > > > >>>>>>> > > > > cudaGetDeviceCount(&devCount); > > > > >>>>>>> > > > > printf( "CUDA Devices: \n \n"); > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > Satish > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > On Wed, 15 Apr 2020, Junchao Zhang wrote: > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > I remember Barry said superlu gpu support is > broken. > > > > >>>>>>> > > > > > --Junchao Zhang > > > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > On Wed, Apr 15, 2020 at 3:47 PM Mark Adams < > > > > >>>>>>> mfadams at lbl.gov> wrote: > > > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > > How does one use SuperLU with GPUs. I don't seem > to > > > get > > > > >>>>>>> any GPU > > > > >>>>>>> > > > > > > performance data so I assume GPUs are not getting > > > turned > > > > >>>>>>> on. Am I > > > > >>>>>>> > > wrong > > > > >>>>>>> > > > > > > about that? 
> > > > >>>>>>> > > > > > > > > > > >>>>>>> > > > > > > I configure with: > > > > >>>>>>> > > > > > > configure options: --with-fc=0 --COPTFLAGS="-g > -O2 > > > -fPIC > > > > >>>>>>> -fopenmp" > > > > >>>>>>> > > > > > > --CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" > --FOPTFLAGS="-g > > > > >>>>>>> -O2 -fPIC > > > > >>>>>>> > > > > -fopenmp" > > > > >>>>>>> > > > > > > --CUDAOPTFLAGS="-O2 -g" --with-ssl=0 > --with-batch=0 > > > > >>>>>>> > > --with-cxx=mpicxx > > > > >>>>>>> > > > > > > --with-mpiexec="jsrun -g1" --with-cuda=1 > > > > >>>>>>> --with-cudac=nvcc > > > > >>>>>>> > > > > > > --download-p4est=1 --download-zlib > --download-hdf5=1 > > > > >>>>>>> > > --download-metis > > > > >>>>>>> > > > > > > --download-superlu --download-superlu_dist > > > > >>>>>>> --with-make-np=16 > > > > >>>>>>> > > > > > > --download-parmetis --download-triangle > > > > >>>>>>> > > > > > > > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > --with-blaslapack-lib="-L/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/gcc-6.4.0/netlib-lapack-3.8.0-wcabdyqhdi5rooxbkqa6x5d7hxyxwdkm/lib64 > > > > >>>>>>> > > > > > > -lblas -llapack" --with-cc=mpicc > > > > >>>>>>> --with-shared-libraries=1 > > > > >>>>>>> > > --with-x=0 > > > > >>>>>>> > > > > > > --with-64-bit-indices=0 --with-debugging=0 > > > > >>>>>>> > > > > > > PETSC_ARCH=arch-summit-opt-gnu-cuda-omp > > > --with-openmp=1 > > > > >>>>>>> > > > > > > --with-threadsaftey=1 --with-log=1 > > > > >>>>>>> > > > > > > > > > > >>>>>>> > > > > > > Thanks, > > > > >>>>>>> > > > > > > Mark > > > > >>>>>>> > > > > > > > > > > >>>>>>> > > > > > > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > > > >>>>>>> > > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > > > >>>>>>> > > > > > >>>>>>> > > > > >>>>>>> > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbuerkle at web.de Sun Apr 19 11:21:29 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Sun, 19 Apr 2020 18:21:29 +0200 Subject: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX Message-ID: Hi, ? I have a question about the behavior of?MAT_INITIAL_MATRIX for MatMatMult. I a have a set of MPIDENSE and MPIAIJ matrices for which I have defined the number of local rows (and local columns) manually, i.e. not used PETSC_DECIDE for MatSetSizes. The number of local rows is different from what PETSC_DECIDE would choose. If I do a MatMatMult with A=MPIDENSE and B=MPIAIJ with MAT_INITIAL_MATRIX then matrix C will have the number of local rows corresponding to A and B, namely what I have defined with MatSetSizes when creating the matrices A and B. But when both matrices are dense, A=MPIDENSE and B=MPIDENSE, then the resulting matrix C will have different number of local rows, namely what I would get with if I would create the matrix with PETSC_DECIDE. Is this behavior normal? The problem is that I have to multiply both resulting matrices which will then throw a "Nonconforming object sizes error" as they have different number of local rows. Any ideas what goes wrong here? Marius From stefano.zampini at gmail.com Sun Apr 19 11:26:30 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Sun, 19 Apr 2020 19:26:30 +0300 Subject: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX In-Reply-To: References: Message-ID: Matrix C should always inherit the number of local rows from A and the number of local columns from B. Is it not the case for your code? 
If so, please provide a MWE to reproduce Also, which version of PETSc are you using? Il Dom 19 Apr 2020, 19:21 Marius Buerkle ha scritto: > > Hi, > > I have a question about the behavior of MAT_INITIAL_MATRIX for MatMatMult. > I a have a set of MPIDENSE and MPIAIJ matrices for which I have defined the > number of local rows (and local columns) manually, i.e. not used > PETSC_DECIDE for MatSetSizes. The number of local rows is different from > what PETSC_DECIDE would choose. If I do a MatMatMult with A=MPIDENSE and > B=MPIAIJ with MAT_INITIAL_MATRIX then matrix C will have the number of > local rows corresponding to A and B, namely what I have defined with > MatSetSizes when creating the matrices A and B. But when both matrices are > dense, A=MPIDENSE and B=MPIDENSE, then the resulting matrix C will have > different number of local rows, namely what I would get with if I would > create the matrix with PETSC_DECIDE. Is this behavior normal? The problem > is that I have to multiply both resulting matrices which will then throw a > "Nonconforming object sizes error" as they have different number of local > rows. Any ideas what goes wrong here? > > > Marius > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Sun Apr 19 13:24:30 2020 From: hzhang at mcs.anl.gov (Zhang, Hong) Date: Sun, 19 Apr 2020 18:24:30 +0000 Subject: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX In-Reply-To: References: , Message-ID: Marius, C1 = Ampidense*Bmpiaij inherits the number of local rows from A and the number of local columns from B. C2 = Ampidense*Bmpidense is computed via external package Elemental, which petsc does not dictate the parallel layout of C2 in current petsc/elemental interface. I am not sure if we can do it. Do you want C2 maintains local number of A? Hong ________________________________ From: petsc-users on behalf of Stefano Zampini Sent: Sunday, April 19, 2020 11:26 AM To: Marius Buerkle Cc: PETSc users list Subject: Re: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX Matrix C should always inherit the number of local rows from A and the number of local columns from B. Is it not the case for your code? If so, please provide a MWE to reproduce Also, which version of PETSc are you using? Il Dom 19 Apr 2020, 19:21 Marius Buerkle > ha scritto: Hi, I have a question about the behavior of MAT_INITIAL_MATRIX for MatMatMult. I a have a set of MPIDENSE and MPIAIJ matrices for which I have defined the number of local rows (and local columns) manually, i.e. not used PETSC_DECIDE for MatSetSizes. The number of local rows is different from what PETSC_DECIDE would choose. If I do a MatMatMult with A=MPIDENSE and B=MPIAIJ with MAT_INITIAL_MATRIX then matrix C will have the number of local rows corresponding to A and B, namely what I have defined with MatSetSizes when creating the matrices A and B. But when both matrices are dense, A=MPIDENSE and B=MPIDENSE, then the resulting matrix C will have different number of local rows, namely what I would get with if I would create the matrix with PETSC_DECIDE. Is this behavior normal? The problem is that I have to multiply both resulting matrices which will then throw a "Nonconforming object sizes error" as they have different number of local rows. Any ideas what goes wrong here? Marius -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mbuerkle at web.de Sun Apr 19 19:47:59 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Mon, 20 Apr 2020 02:47:59 +0200 Subject: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Sun Apr 19 21:13:34 2020 From: hzhang at mcs.anl.gov (Zhang, Hong) Date: Mon, 20 Apr 2020 02:13:34 +0000 Subject: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX In-Reply-To: References: , Message-ID: Marius, I'll test it tomorrow. Hong ________________________________ From: Marius Buerkle Sent: Sunday, April 19, 2020 7:47 PM To: Zhang, Hong Cc: Stefano Zampini ; PETSc users list Subject: Aw: Re: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX Hi Hong, Yes exactly, I would like C2 to maintain the local number of A. Do you think this is possible? I tried to allocated A before with the correct local number and use MAT_REUSE but this gave a segmentation fault. Best Marius Marius, C1 = Ampidense*Bmpiaij inherits the number of local rows from A and the number of local columns from B. C2 = Ampidense*Bmpidense is computed via external package Elemental, which petsc does not dictate the parallel layout of C2 in current petsc/elemental interface. I am not sure if we can do it. Do you want C2 maintains local number of A? Hong ________________________________ From: petsc-users on behalf of Stefano Zampini Sent: Sunday, April 19, 2020 11:26 AM To: Marius Buerkle Cc: PETSc users list Subject: Re: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX Matrix C should always inherit the number of local rows from A and the number of local columns from B. Is it not the case for your code? If so, please provide a MWE to reproduce Also, which version of PETSc are you using? Il Dom 19 Apr 2020, 19:21 Marius Buerkle > ha scritto: Hi, I have a question about the behavior of MAT_INITIAL_MATRIX for MatMatMult. I a have a set of MPIDENSE and MPIAIJ matrices for which I have defined the number of local rows (and local columns) manually, i.e. not used PETSC_DECIDE for MatSetSizes. The number of local rows is different from what PETSC_DECIDE would choose. If I do a MatMatMult with A=MPIDENSE and B=MPIAIJ with MAT_INITIAL_MATRIX then matrix C will have the number of local rows corresponding to A and B, namely what I have defined with MatSetSizes when creating the matrices A and B. But when both matrices are dense, A=MPIDENSE and B=MPIDENSE, then the resulting matrix C will have different number of local rows, namely what I would get with if I would create the matrix with PETSC_DECIDE. Is this behavior normal? The problem is that I have to multiply both resulting matrices which will then throw a "Nonconforming object sizes error" as they have different number of local rows. Any ideas what goes wrong here? Marius -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbuerkle at web.de Sun Apr 19 22:35:57 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Mon, 20 Apr 2020 05:35:57 +0200 Subject: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Mon Apr 20 09:11:17 2020 From: hzhang at mcs.anl.gov (Zhang, Hong) Date: Mon, 20 Apr 2020 14:11:17 +0000 Subject: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX In-Reply-To: References: , Message-ID: Marius, Good to know. I assume the issue is resolved. Let us know if you see any problem. 
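If you want to double-check the layout rule on your side, below is a minimal, untested sketch (the local size of 10 per rank and the MPIAIJ type for B are just illustrative choices, not taken from your code). It sets explicit local sizes instead of PETSC_DECIDE and then reports what C = A*B inherits:

#include <petscmat.h>
int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  Mat            A, B, C;
  PetscInt       ma, na, mb, nb, mc, nc;
  PetscMPIInt    rank;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  /* A: MPIDENSE with a hand-picked local size (10 rows/columns per rank) instead of PETSC_DECIDE */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, 10, 10, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(A, MATMPIDENSE);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  /* B: MPIAIJ with the same hand-picked layout (left empty; only the layout matters here) */
  ierr = MatCreate(PETSC_COMM_WORLD, &B);CHKERRQ(ierr);
  ierr = MatSetSizes(B, 10, 10, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(B, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatSetUp(B);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  /* C should inherit its local rows from A and its local columns from B */
  ierr = MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);
  ierr = MatGetLocalSize(A, &ma, &na);CHKERRQ(ierr);
  ierr = MatGetLocalSize(B, &mb, &nb);CHKERRQ(ierr);
  ierr = MatGetLocalSize(C, &mc, &nc);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%d] local sizes: A %D x %D, B %D x %D, C %D x %D\n",
                                 rank, ma, na, mb, nb, mc, nc);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  ierr = MatDestroy(&C);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

The dense-dense product is computed via Elemental, so checking that case needs a PETSc build configured with --download-elemental.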
Hong ________________________________ From: Marius Buerkle Sent: Sunday, April 19, 2020 10:35 PM To: Zhang, Hong Cc: Stefano Zampini ; PETSc users list Subject: Aw: Re: Re: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX Hi Hong and Stefano, I just checked with the latest PETSC commit (v3.13-145-gf227361) there C2 maintains the local number of A, for commit v3.12.2-537-g5f77d1e which I tried initially I get a different number of local rows for A. While I did not check it thoroughly it seems to work with v3.13-145-gf227361. So I assume this was addressed in some commit between this too releases. Marius Marius, I'll test it tomorrow. Hong ________________________________ From: Marius Buerkle Sent: Sunday, April 19, 2020 7:47 PM To: Zhang, Hong Cc: Stefano Zampini ; PETSc users list Subject: Aw: Re: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX Hi Hong, Yes exactly, I would like C2 to maintain the local number of A. Do you think this is possible? I tried to allocated A before with the correct local number and use MAT_REUSE but this gave a segmentation fault. Best Marius Marius, C1 = Ampidense*Bmpiaij inherits the number of local rows from A and the number of local columns from B. C2 = Ampidense*Bmpidense is computed via external package Elemental, which petsc does not dictate the parallel layout of C2 in current petsc/elemental interface. I am not sure if we can do it. Do you want C2 maintains local number of A? Hong ________________________________ From: petsc-users on behalf of Stefano Zampini Sent: Sunday, April 19, 2020 11:26 AM To: Marius Buerkle Cc: PETSc users list Subject: Re: [petsc-users] MatMatMult with MAT_INITIAL_MATRIX Matrix C should always inherit the number of local rows from A and the number of local columns from B. Is it not the case for your code? If so, please provide a MWE to reproduce Also, which version of PETSc are you using? Il Dom 19 Apr 2020, 19:21 Marius Buerkle > ha scritto: Hi, I have a question about the behavior of MAT_INITIAL_MATRIX for MatMatMult. I a have a set of MPIDENSE and MPIAIJ matrices for which I have defined the number of local rows (and local columns) manually, i.e. not used PETSC_DECIDE for MatSetSizes. The number of local rows is different from what PETSC_DECIDE would choose. If I do a MatMatMult with A=MPIDENSE and B=MPIAIJ with MAT_INITIAL_MATRIX then matrix C will have the number of local rows corresponding to A and B, namely what I have defined with MatSetSizes when creating the matrices A and B. But when both matrices are dense, A=MPIDENSE and B=MPIDENSE, then the resulting matrix C will have different number of local rows, namely what I would get with if I would create the matrix with PETSC_DECIDE. Is this behavior normal? The problem is that I have to multiply both resulting matrices which will then throw a "Nonconforming object sizes error" as they have different number of local rows. Any ideas what goes wrong here? Marius -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Mon Apr 20 16:45:21 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Mon, 20 Apr 2020 16:45:21 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> Message-ID: Hello, Randy, I further looked at the problem and believe it was due to overwhelming traffic. The code sometimes fails at MPI_Waitall. I printed out MPI error strings of bad MPI Statuses. 
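(For reference, the check looked roughly like the sketch below; the helper name and variables are made up for illustration and this is not the actual petsc source.)

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Wait on n pending isends/irecvs and print an error string for each one that
   failed. The communicator's error handler must be MPI_ERRORS_RETURN, otherwise
   MPI aborts before the statuses can be inspected. */
static void waitall_and_report(int n, MPI_Request reqs[])
{
  MPI_Status *stats = (MPI_Status*)malloc(n*sizeof(MPI_Status));
  int         err, i, len;
  char        msg[MPI_MAX_ERROR_STRING];

  err = MPI_Waitall(n, reqs, stats);
  if (err == MPI_ERR_IN_STATUS) {            /* at least one request failed */
    for (i = 0; i < n; i++) {
      if (stats[i].MPI_ERROR != MPI_SUCCESS) {
        MPI_Error_string(stats[i].MPI_ERROR, msg, &len);
        printf("request %d failed: %s\n", i, msg);
      }
    }
  }
  free(stats);
}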
One of them is like "MPID_nem_tcp_connpoll(1845): Communication error with rank 25: Connection reset by
peer", which is a TCP error and has nothing to do with petsc.
Further investigation shows that in the case of 5120 ranks with 320 sub-communicators, during
VecScatterSetUp each rank has around 640 isend/irecv neighbors, and quite a few ranks have 1280 isend
neighbors. I guess these overwhelming isends occasionally crashed the connection.
The piece of code in VecScatterSetUp calculates the communication pattern. With index sets having good
locality, the calculation itself incurs less traffic. Here, good locality means the indices in an index
set mostly point to local entries. However, the AOApplicationToPetsc() call in your code unnecessarily
ruined the good petsc ordering. If we remove AOApplicationToPetsc() (the vecscatter result is still
correct), then each rank uniformly has around 320 isends/irecvs.
So, test with this modification and see if it really works in your environment. If it is not applicable,
we can provide options in petsc to carry out the communication in phases to avoid flooding the network
(though it is better done by MPI).
Thanks.
--Junchao Zhang
On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie wrote:
> Hi Junchao, > > Thank you for your efforts. > We tried petsc-3.13.0 but it made no difference.
> We think now the issue are with sysctl parameters, and increasing those > seemed to have cleared up
the problem. > This also most likely explains how different clusters had different > behaviors with our
test code. > > We are now running our code and will report back once we are sure that > there are no
further issues. > > Thanks again for your help. > > Randy M. > > On Apr 17, 2020, at 8:09 AM, Junchao
Zhang > wrote: > > > > > On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang > wrote: > >> Randy,
>> I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also >> found the error went away
with petsc-3.13. However, I have not figured out >> what is the bug and which commit fixed it :).
>> So at your side, it is better to use the latest petsc. >> > I want to add that even with petsc-3.12.4
the error is random. I was > only able to reproduce the error once, so I can not claim petsc-3.13
> actually fixed it (or, the bug is really in petsc). > > >> --Junchao Zhang >> >> >> On Thu, Apr 16,
2020 at 9:06 PM Junchao Zhang >> wrote: >> >>> Randy, >>> Up to now I could not reproduce your error,
even with the biggest >>> mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 >>> While I continue
doing test, you can try other options. It looks you >>> want to duplicate a vector to subcomms. I don't
think you need the two >>> lines: >>> >>> call AOApplicationToPetsc(aoParent,nis,ind1,ierr)
>>> call AOApplicationToPetsc(aoSub,nis,ind2,ierr) >>> >>> In addition, you can use simpler and more
memory-efficient index sets. >>> There is a petsc example for this task, see case 3 in
>>> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c >>> BTW, it is good to use
petsc master so we are on the same page. >>> --Junchao Zhang >>> >>> >>> On Wed, Apr 15, 2020 at 10:28
AM Randall Mackie >>> wrote: >>> >>>> Hi Junchao, >>>> >>>> So I was able to create a small test code
that duplicates the issue we >>>> have been having, and it is attached to this email in a zip file.
>>>> Included is the test.F90 code, the commands to duplicate crash and to >>>> duplicate a successful
run, output errors, and our petsc configuration.
>>>> >>>> Our findings to date include: >>>> >>>> The error is reproducible in a very short time with this script >>>> It is related to nproc*nsubs and (although to a less extent) to DM grid >>>> size >>>> It happens regardless of MPI implementation (mpich, intel mpi 2018, >>>> 2019, openmpi) or compiler (gfortran/gcc , intel 2018) >>>> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to >>>> slightly increase the limit, but still fails on the full machine set. >>>> Nothing looks interesting on valgrind >>>> >>>> Our initial tests were carried out on an Azure cluster, but we also >>>> tested on our smaller cluster, and we found the following: >>>> >>>> Works: >>>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test >>>> -nsubs 80 -nx 100 -ny 100 -nz 100 >>>> >>>> Crashes (this works on Azure) >>>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test >>>> -nsubs 80 -nx 100 -ny 100 -nz 100 >>>> >>>> So it looks like it may also be related to the physical number of nodes >>>> as well. >>>> >>>> In any case, even with 2560 processes on 192 cores the memory does not >>>> go above 3.5 Gbyes so you don?t need a huge cluster to test. >>>> >>>> Thanks, >>>> >>>> Randy M. >>>> >>>> >>>> >>>> On Apr 14, 2020, at 12:23 PM, Junchao Zhang >>>> wrote: >>>> >>>> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I >>>> doubted it was the problem. Even if users configure petsc with 64-bit >>>> indices, we use PetscMPIInt in MPI calls. So it is not a problem. >>>> Try -vecscatter_type mpi1 to restore to the original VecScatter >>>> implementation. If the problem still remains, could you provide a test >>>> example for me to debug? >>>> >>>> --Junchao Zhang >>>> >>>> >>>> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie >>>> wrote: >>>> >>>>> Hi Junchao, >>>>> >>>>> We have tried your two suggestions but the problem remains. >>>>> And the problem seems to be on the MPI_Isend line 117 in >>>>> PetscGatherMessageLengths and not MPI_AllReduce. >>>>> >>>>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking >>>>> the problem must be elsewhere and not MPI. >>>>> >>>>> Give that this is a 64 bit indices build of PETSc, is there some >>>>> possible incompatibility between PETSc and MPI calls? >>>>> >>>>> We are open to any other possible suggestions to try as other than >>>>> valgrind on thousands of processes we seem to have run out of ideas. >>>>> >>>>> Thanks, Randy M. >>>>> >>>>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang >>>>> wrote: >>>>> >>>>> >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang < >>>>> junchao.zhang at gmail.com> wrote: >>>>> >>>>>> Randy, >>>>>> Someone reported similar problem before. It turned out an Intel >>>>>> MPI MPI_Allreduce bug. A workaround is setting the environment variable >>>>>> I_MPI_ADJUST_ALLREDUCE=1.arr >>>>>> >>>>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>>>> >>>>>> But you mentioned mpich also had the error. So maybe the problem >>>>>> is not the same. So let's try the workaround first. If it doesn't work, add >>>>>> another petsc option -build_twosided allreduce, which is a workaround for >>>>>> Intel MPI_Ibarrier bugs we met. >>>>>> Thanks. 
>>>>>> --Junchao Zhang >>>>>> >>>>>> >>>>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie < >>>>>> rlmackie862 at gmail.com> wrote: >>>>>> >>>>>>> Dear PETSc users, >>>>>>> >>>>>>> We are trying to understand an issue that has come up in running our >>>>>>> code on a large cloud cluster with a large number of processes and subcomms. >>>>>>> This is code that we use daily on multiple clusters without >>>>>>> problems, and that runs valgrind clean for small test problems. >>>>>>> >>>>>>> The run generates the following messages, but doesn?t crash, just >>>>>>> seems to hang with all processes continuing to show activity: >>>>>>> >>>>>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in >>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>>>>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in >>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>>>>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in >>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>>>>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in >>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>>>>>> >>>>>>> >>>>>>> Looking at line 117 in PetscGatherMessageLengths we find the >>>>>>> offending statement is the MPI_Isend: >>>>>>> >>>>>>> >>>>>>> /* Post the Isends with the message length-info */ >>>>>>> for (i=0,j=0; i>>>>>> if (ilengths[i]) { >>>>>>> ierr = >>>>>>> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>>>>>> j++; >>>>>>> } >>>>>>> } >>>>>>> >>>>>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving >>>>>>> the same problem. >>>>>>> >>>>>>> We suspect there is some limit being set on this cloud cluster on >>>>>>> the number of file connections or something, but we don?t know. >>>>>>> >>>>>>> Anyone have any ideas? We are sort of grasping for straws at this >>>>>>> point. >>>>>>> >>>>>>> Thanks, Randy M. >>>>>>> >>>>>> >>>>> >>>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Tue Apr 21 04:13:38 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Tue, 21 Apr 2020 06:13:38 -0300 Subject: [petsc-users] Ignoring PETSC_ARCH for make check? In-Reply-To: References: <874ktgcynm.fsf@jedbrown.org> Message-ID: Satish, Great, thanks. Santiago On Sun, Apr 19, 2020 at 12:23 PM Satish Balay wrote: > PETSc supports both inplace multiple builds - and prefix builds. Most > packages don't support inplace multiple builds. For inplace multiple builds > - you need the PETSC_ARCH concept. But not for prefix builds. > > > i.e: 2 inplace builds. > > ./configure PETSC_ARCH=arch-build1 --with-cc=gcc etc.. > make > ./configure PETSC_ARCH=arch-build2 --with-cc=icc etc.. > make > > 2 prefix builds > > ./configure --prefix=$HOME/soft/petsc-install-1 PETSC_ARCH=arch-build1 > --with-cc=gcc > make > make install > > ./configure --prefix=$HOME/soft/petsc-install-1 PETSC_ARCH=arch-build2 > --with-cc=icc > make > make install > > Note: using a different PETSC_ARCH above so that the intermediate build > files from the first build don't conflict with those from the second build. > > Note: due to same files used in both cases - i.e prefix and inplace [i.e > petsc sources from build location] for make check - a wrong value of > 'PETSC_ARCH' can break 'make check' > > The files installed in prefix location don't care about PETSC_ARCH value. 
> > Satish > > On Sun, 19 Apr 2020, san.temporal at gmail.com wrote: > > > Ok, the, the second option applies... I had forgotten about this > > observation from the multiple times I installed PETSc in the past. > > > > Then, two questions come to mind: > > > > 1. Why is it set up like that? > > 2. What is the difference in behaviour? I see the same output from both > > options. > > > > Thanks again! > > > > On Sat, Apr 18, 2020 at 5:43 PM Jed Brown wrote: > > > > > It's intentional and been like this for ages. Prefix installs have > only > > > PETSC_DIR (just a path, like other packages), and *must not* set > > > PETSC_ARCH. > > > > > > san.temporal at gmail.com writes: > > > > > > > Hi all, > > > > > > > > I have just successfully compiled 3.13.0. But with install this is > what I > > > > get > > > > > > > > $ make > > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > > > PETSC_ARCH=arch-linux2-c-opt install > > > > *** Using > > > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > > > PETSC_ARCH=arch-linux2-c-opt *** > > > > *** Installing PETSc at prefix location: /home/santiago/usr/local > > > *** > > > > ==================================== > > > > Install complete. > > > > Now to check if the libraries are working do (in current > directory): > > > > make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" check > > > > ==================================== > > > > /usr/bin/make --no-print-directory -f makefile > > > > PETSC_ARCH=arch-linux2-c-opt > > > > PETSC_DIR=/home/santiago/Documents/installers/petsc/petsc-3.13.0 > > > > mpi4py-install petsc4py-install libmesh-install mfem-install > > > slepc-install > > > > hpddm-install amrex-install > > > > make[2]: Nothing to be done for 'mpi4py-install'. > > > > make[2]: Nothing to be done for 'petsc4py-install'. > > > > make[2]: Nothing to be done for 'libmesh-install'. > > > > make[2]: Nothing to be done for 'mfem-install'. > > > > make[2]: Nothing to be done for 'slepc-install'. > > > > make[2]: Nothing to be done for 'hpddm-install'. > > > > make[2]: Nothing to be done for 'amrex-install'. > > > > > > > > What is strange to me is that I am instructed to execute a line with > > > > PETSC_ARCH=", while my environment has PETSC_ARCH=arch-linux2-c-opt > > > > Why is that? > > > > > > > > PS: The same happened to me with various other compilations I have > just > > > > tested, with 3.9, 3.10, 3.11, 3.12 > > > > > > > > PS2: I do not recall seeing this ever before, although I may have > missed > > > > it/forgotten. > > > > > > > > Thanks in advance, > > > > Santiago > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushikv318 at gmail.com Tue Apr 21 09:43:29 2020 From: kaushikv318 at gmail.com (Kaushik Vijaykumar) Date: Tue, 21 Apr 2020 10:43:29 -0400 Subject: [petsc-users] petsc error disappears when I print something in the function Message-ID: Hello group, I have been trying to navigate a weird error that I have found in my FEA code that I am developing using PetSc. The error occurs when, i execute a call to stiffness generation of a Tetrahederal element. 
The code returns an memory error if I don't print the following statements in the function, (see the ierr print statements below): for (i1=1; i1<4; i1++) // Loop 6b { for (j1=1; j1<4; j1++) // Loop 7b { for (k1=1; k1<4; k1++) // Loop 8 { for (l1=1; l1<4; l1++) // Loop 9 { s[ii1+i1-1][jj1+j1-1] = s[ii1+i1-1][jj1+j1-1]+C4[i1][k1][j1][l1]*w[k1][l1]*weight; ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"C4 %f \n",C4[i1][k1][j1][l1]);CHKERRQ(ierr); ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"w %f \n",w[k1][l1]);CHKERRQ(ierr); ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"weight %f \n",weight);CHKERRQ(ierr); } // Loop 9 } // Loop 8 } // Loop 7b } // Loop 6b Any help on this is really appreciated. Thanks Kaushik -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 21 09:45:58 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 21 Apr 2020 10:45:58 -0400 Subject: [petsc-users] petsc error disappears when I print something in the function In-Reply-To: References: Message-ID: You are overwriting memory somewhere. The prints just move it around. I suggest running with valgrind. Thanks, Matt On Tue, Apr 21, 2020 at 10:44 AM Kaushik Vijaykumar wrote: > Hello group, > > I have been trying to navigate a weird error that I have found in my FEA > code that I am developing using PetSc. The error occurs when, i execute a > call to stiffness generation of a Tetrahederal element. The code returns an > memory error if I don't print the following statements in the function, > (see the ierr print statements below): > > for (i1=1; i1<4; i1++) // Loop 6b > { > for (j1=1; j1<4; j1++) // Loop 7b > { > for (k1=1; k1<4; k1++) // Loop 8 > { > for (l1=1; l1<4; l1++) // Loop 9 > { > s[ii1+i1-1][jj1+j1-1] = > s[ii1+i1-1][jj1+j1-1]+C4[i1][k1][j1][l1]*w[k1][l1]*weight; > ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"C4 %f > \n",C4[i1][k1][j1][l1]);CHKERRQ(ierr); > ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"w %f > \n",w[k1][l1]);CHKERRQ(ierr); > ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"weight %f > \n",weight);CHKERRQ(ierr); > } // Loop 9 > } // Loop 8 > } // Loop 7b > } // Loop 6b > > > Any help on this is really appreciated. > > Thanks > Kaushik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Tue Apr 21 09:50:49 2020 From: dave.mayhem23 at gmail.com (Dave May) Date: Tue, 21 Apr 2020 16:50:49 +0200 Subject: [petsc-users] petsc error disappears when I print something in the function In-Reply-To: References: Message-ID: On Tue 21. Apr 2020 at 16:47, Matthew Knepley wrote: > You are overwriting memory somewhere. The prints just move it around. I > suggest running with valgrind. > Matt is right. However, judging by the code snippet I bet all the arrays in question are statically allocated, thus valgrind may be of somewhat limited use. If you send the entire function, or all the related pieces of code, someone in the list might spot the error. Thanks Dave > Thanks, > > Matt > > On Tue, Apr 21, 2020 at 10:44 AM Kaushik Vijaykumar > wrote: > >> Hello group, >> >> I have been trying to navigate a weird error that I have found in my FEA >> code that I am developing using PetSc. The error occurs when, i execute a >> call to stiffness generation of a Tetrahederal element. 
The code returns an >> memory error if I don't print the following statements in the function, >> (see the ierr print statements below): >> >> for (i1=1; i1<4; i1++) // Loop 6b >> { >> for (j1=1; j1<4; j1++) // Loop 7b >> { >> for (k1=1; k1<4; k1++) // Loop 8 >> { >> for (l1=1; l1<4; l1++) // Loop 9 >> { >> s[ii1+i1-1][jj1+j1-1] = >> s[ii1+i1-1][jj1+j1-1]+C4[i1][k1][j1][l1]*w[k1][l1]*weight; >> ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"C4 %f >> \n",C4[i1][k1][j1][l1]);CHKERRQ(ierr); >> ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"w %f >> \n",w[k1][l1]);CHKERRQ(ierr); >> ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"weight %f >> \n",weight);CHKERRQ(ierr); >> } // Loop 9 >> } // Loop 8 >> } // Loop 7b >> } // Loop 6b >> >> >> Any help on this is really appreciated. >> >> Thanks >> Kaushik >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushikv318 at gmail.com Tue Apr 21 10:29:01 2020 From: kaushikv318 at gmail.com (Kaushik Vijaykumar) Date: Tue, 21 Apr 2020 11:29:01 -0400 Subject: [petsc-users] petsc error disappears when I print something in the function In-Reply-To: References: Message-ID: Thanks Dave and Mathew. I will keep looking for the bug and here is the full piece of code. PetscInt kel_tet10CX(PetscScalar *X, PetscScalar *Y, PetscScalar *Z, PetscScalar *ke, PetscInt elmat, PetscScalar dmg, PetscInt el_has_orient, PetscScalar rot[3][3], PetscScalar temperature, PetscScalar *thermforce, PetscInt damflag, PetscScalar Cmat[6][6], PetscScalar alphavec[6]){ /////////PETSc declarations PetscErrorCode ierr; PetscInt ii,jj,kk,k1,ll,l1,i,j,k,l,i1,j1,jj1,ii1,iflag; PetscInt nope=10; //// nodes on the element PetscScalar alpha2[4][4],alpha2unrot[4][4],C[6][6],C4[4][4][4][4],C4unrot[4][4][4][4],eta,etas[9],GP[4],s[100][100],shp[10][4],shpj[4][11],w[4][4],weight,wts[4],xi,xis[4],xl[10][3],xsj,xsjj,zeta,zetas[4]; /////////////Prototypes PetscInt constit(PetscInt elmat,PetscScalar C4[4][4][4][4], PetscScalar alpha2[4][4], PetscScalar dmg, PetscScalar temperature, PetscInt damflag, PetscScalar Cmat[6][6], PetscScalar alphavec[6]); void shape10tet_(PetscScalar *xi, PetscScalar *eta, PetscScalar *zeta, PetscScalar xl[4][11], PetscScalar *xsj, PetscScalar shp[5][11], PetscInt *iflag); // you have to dimension the array arguments to the size Fortran sees them // printf("\n Entered SPAF element Tet10 \n"); /////////////Code if (el_has_orient == 1){ierr = constit(elmat,C4unrot,alpha2unrot,dmg,temperature,damflag,Cmat,alphavec);ierr = rot4(rot,C4unrot,C4);ierr = rot2(rot,alpha2unrot,alpha2);} else {ierr = constit(elmat,C4,alpha2,dmg,temperature,damflag,Cmat,alphavec);} // printf("\n Came back from spaf materials to SPAF element Tet10 \n"); // ierr = PetscPrintf(PETSC_COMM_SELF,"In tet10 \n");CHKERRQ(ierr); // Gauss points and weights xis[0] = 0.138196601125011; xis[1] = 0.585410196624968; xis[2] = 0.138196601125011; xis[3] = 0.138196601125011; etas[0] = 0.138196601125011; etas[1] = 0.138196601125011; etas[2] = 0.585410196624968; etas[3] = 0.138196601125011; zetas[0] = 0.138196601125011; zetas[1] = 0.138196601125011; zetas[2] = 0.138196601125011; zetas[3] = 0.585410196624968; wts[0] = 0.04166666666667; wts[1] = 0.04166666666667; wts[2] = 0.04166666666667; wts[3] = 0.04166666666667; // assemble nodal coordinates into xl // when 
fortran gets it, it will be transposed and the indices increased by 1 for (ii=0; ii wrote: > > > On Tue 21. Apr 2020 at 16:47, Matthew Knepley wrote: > >> You are overwriting memory somewhere. The prints just move it around. I >> suggest running with valgrind. >> > > Matt is right. However, judging by the code snippet I bet all the arrays > in question are statically allocated, thus valgrind may be of somewhat > limited use. > > If you send the entire function, or all the related pieces of code, > someone in the list might spot the error. > > Thanks > Dave > > > > >> Thanks, >> >> Matt >> >> On Tue, Apr 21, 2020 at 10:44 AM Kaushik Vijaykumar < >> kaushikv318 at gmail.com> wrote: >> >>> Hello group, >>> >>> I have been trying to navigate a weird error that I have found in my FEA >>> code that I am developing using PetSc. The error occurs when, i execute a >>> call to stiffness generation of a Tetrahederal element. The code returns an >>> memory error if I don't print the following statements in the function, >>> (see the ierr print statements below): >>> >>> for (i1=1; i1<4; i1++) // Loop 6b >>> { >>> for (j1=1; j1<4; j1++) // Loop 7b >>> { >>> for (k1=1; k1<4; k1++) // Loop 8 >>> { >>> for (l1=1; l1<4; l1++) // Loop 9 >>> { >>> s[ii1+i1-1][jj1+j1-1] = >>> s[ii1+i1-1][jj1+j1-1]+C4[i1][k1][j1][l1]*w[k1][l1]*weight; >>> ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"C4 %f >>> \n",C4[i1][k1][j1][l1]);CHKERRQ(ierr); >>> ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"w %f >>> \n",w[k1][l1]);CHKERRQ(ierr); >>> ierr = PetscFPrintf(PETSC_COMM_WORLD,outfile,"weight %f >>> \n",weight);CHKERRQ(ierr); >>> } // Loop 9 >>> } // Loop 8 >>> } // Loop 7b >>> } // Loop 6b >>> >>> >>> Any help on this is really appreciated. >>> >>> Thanks >>> Kaushik >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbuerkle at web.de Wed Apr 22 00:11:33 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Wed, 22 Apr 2020 07:11:33 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: Message-ID: Hi, What is PetscObjectGetComm expected to return? I thought it would give the MPI communicator the object lives on. So if I create A matrix on PETSC_COMM_WORLD a call of PetscObjectGetComm for A it would return PETSC_COMM_WORLD? But it seems to return something else, and while most of the nodes return a similar communicator some are giving a different one. That said, is there a way to get the MPI communicator a matrix lives on? Best, Marius From patrick.sanan at gmail.com Wed Apr 22 00:50:17 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Wed, 22 Apr 2020 07:50:17 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: Message-ID: To confirm, are you casting A to PetscObject? e.g. MPI_Comm comm; /* ... */ PetscObjectGetComm((PetscObject)A,&comm); Am Mi., 22. Apr. 2020 um 07:11 Uhr schrieb Marius Buerkle : > Hi, > > What is PetscObjectGetComm expected to return? I thought it would give the > MPI communicator the object lives on. So if I create A matrix on > PETSC_COMM_WORLD a call of PetscObjectGetComm for A it would return > PETSC_COMM_WORLD? But it seems to return something else, and while most of > the nodes return a similar communicator some are giving a different one. 
> That said, is there a way to get the MPI communicator a matrix lives on? > > Best, > Marius > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbuerkle at web.de Wed Apr 22 00:58:44 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Wed, 22 Apr 2020 07:58:44 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Wed Apr 22 01:20:59 2020 From: dave.mayhem23 at gmail.com (Dave May) Date: Wed, 22 Apr 2020 08:20:59 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: Message-ID: On Wed 22. Apr 2020 at 07:11, Marius Buerkle wrote: > Hi, > > What is PetscObjectGetComm expected to return? As Patrick said, it returns the communicator associated with the petsc object. I thought it would give the MPI communicator the object lives on. So if I > create A matrix on PETSC_COMM_WORLD a call of PetscObjectGetComm for A it > would return PETSC_COMM_WORLD? But it seems to return something else, and > while most of the nodes return a similar communicator some are giving a > different one. How are you actually comparing the communicators (send code snippet)? Which MPI implementation are you using? And when are comparing comms is the comparison code written in C it FORTRAN? That said, is there a way to get the MPI communicator a matrix lives on? You are using the correct function. There is a macro as well but it?s best to use the function. Thanks, Dave > > Best, > Marius > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbuerkle at web.de Wed Apr 22 01:40:19 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Wed, 22 Apr 2020 08:40:19 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_comm.tar.gz Type: application/octet-stream Size: 911 bytes Desc: not available URL: From jroman at dsic.upv.es Wed Apr 22 01:47:47 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 22 Apr 2020 08:47:47 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: Message-ID: <70717E72-B041-47CE-B259-6DAAE142B326@dsic.upv.es> PETSc creates a duplicate of the communicator during object creation. https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscCommDuplicate.html Jose > El 22 abr 2020, a las 8:40, Marius Buerkle escribi?: > > Hi Dave, > > I want to use it in Fortran if possible. But I tried both C and Fortran just to see if it works in general. I am using MPICH 3.3.2. I attached the MWE for C and Fortran with the output I get. > > Marius > > > > > > Hi, > > What is PetscObjectGetComm expected to return? > > As Patrick said, it returns the communicator associated with the petsc object. > > I thought it would give the MPI communicator the object lives on. So if I create A matrix on PETSC_COMM_WORLD a call of PetscObjectGetComm for A it would return PETSC_COMM_WORLD? But it seems to return something else, and while most of the nodes return a similar communicator some are giving a different one. > > How are you actually comparing the communicators (send code snippet)? Which MPI implementation are you using? And when are comparing comms is the comparison code written in C it FORTRAN? > > > That said, is there a way to get the MPI communicator a matrix lives on? > > You are using the correct function. 
There is a macro as well but it?s best to use the function. > > Thanks, > Dave > > > > > Best, > Marius > From mbuerkle at web.de Wed Apr 22 02:06:48 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Wed, 22 Apr 2020 09:06:48 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: <70717E72-B041-47CE-B259-6DAAE142B326@dsic.upv.es> References: <70717E72-B041-47CE-B259-6DAAE142B326@dsic.upv.es> Message-ID: An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 22 08:27:02 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 22 Apr 2020 09:27:02 -0400 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: <70717E72-B041-47CE-B259-6DAAE142B326@dsic.upv.es> Message-ID: On Wed, Apr 22, 2020 at 3:07 AM Marius Buerkle wrote: > I see, but I am still puzzeled, why are the communicators different on > different notes eventhough it is the same object. > This is the output of MPI_Comm_dup() on line 126 of tagm.c. Therefore, dup comms are not guaranteed to have the same id across multiple processes. Thanks, Matt > > > PETSc creates a duplicate of the communicator during object creation. > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscCommDuplicate.html > > Jose > > > > El 22 abr 2020, a las 8:40, Marius Buerkle escribi?: > > > > Hi Dave, > > > > I want to use it in Fortran if possible. But I tried both C and Fortran > just to see if it works in general. I am using MPICH 3.3.2. I attached the > MWE for C and Fortran with the output I get. > > > > Marius > > > > > > > > > > > > Hi, > > > > What is PetscObjectGetComm expected to return? > > > > As Patrick said, it returns the communicator associated with the petsc > object. > > > > I thought it would give the MPI communicator the object lives on. So if > I create A matrix on PETSC_COMM_WORLD a call of PetscObjectGetComm for A it > would return PETSC_COMM_WORLD? But it seems to return something else, and > while most of the nodes return a similar communicator some are giving a > different one. > > > > How are you actually comparing the communicators (send code snippet)? > Which MPI implementation are you using? And when are comparing comms is the > comparison code written in C it FORTRAN? > > > > > > That said, is there a way to get the MPI communicator a matrix lives on? > > > > You are using the correct function. There is a macro as well but it?s > best to use the function. > > > > Thanks, > > Dave > > > > > > > > > > Best, > > Marius > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Wed Apr 22 08:55:16 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Wed, 22 Apr 2020 15:55:16 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: <70717E72-B041-47CE-B259-6DAAE142B326@dsic.upv.es> Message-ID: Perhaps the confusion here is related to the fact that an MPI_Comm is not an integer identifying the communicator. Rather, it's a pointer to a data structure which contains information about the communicator (I'm not positive but probably something like this ). You're converting that pointer to an int and printing it out. 
The value happens to be the same on all ranks except 0, but this doesn't directly tell you anything about equality of the MPI_comm objects that those pointers point to. Am Mi., 22. Apr. 2020 um 15:28 Uhr schrieb Matthew Knepley < knepley at gmail.com>: > On Wed, Apr 22, 2020 at 3:07 AM Marius Buerkle wrote: > >> I see, but I am still puzzeled, why are the communicators different on >> different notes eventhough it is the same object. >> > > This is the output of MPI_Comm_dup() on line 126 of tagm.c. Therefore, dup > comms are not guaranteed to have the same id > across multiple processes. > > Thanks, > > Matt > > >> >> >> PETSc creates a duplicate of the communicator during object creation. >> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscCommDuplicate.html >> >> Jose >> >> >> > El 22 abr 2020, a las 8:40, Marius Buerkle escribi?: >> > >> > Hi Dave, >> > >> > I want to use it in Fortran if possible. But I tried both C and Fortran >> just to see if it works in general. I am using MPICH 3.3.2. I attached the >> MWE for C and Fortran with the output I get. >> > >> > Marius >> > >> > >> > >> > >> > >> > Hi, >> > >> > What is PetscObjectGetComm expected to return? >> > >> > As Patrick said, it returns the communicator associated with the petsc >> object. >> > >> > I thought it would give the MPI communicator the object lives on. So if >> I create A matrix on PETSC_COMM_WORLD a call of PetscObjectGetComm for A it >> would return PETSC_COMM_WORLD? But it seems to return something else, and >> while most of the nodes return a similar communicator some are giving a >> different one. >> > >> > How are you actually comparing the communicators (send code snippet)? >> Which MPI implementation are you using? And when are comparing comms is the >> comparison code written in C it FORTRAN? >> > >> > >> > That said, is there a way to get the MPI communicator a matrix lives on? >> > >> > You are using the correct function. There is a macro as well but it?s >> best to use the function. >> > >> > Thanks, >> > Dave >> > >> > >> > >> > >> > Best, >> > Marius >> > >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Wed Apr 22 11:03:33 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 22 Apr 2020 11:03:33 -0500 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: <70717E72-B041-47CE-B259-6DAAE142B326@dsic.upv.es> Message-ID: MPI_Comm are opaque handles in C and integers in Fortran, which is required by MPI standard. The same applies to other types, like MPI_Op, MPI_Win etc. MPICH and OpenMPI have different implements for the handles. In MPICH handles are integer/bitfield, with some bits being offset to an array of objects. This makes it easy to do things like MPI_Comm_f2c(). In OpenMPI handles are pointers. OpenMPI has to transform pointers to integer offsets in MPI_Comm_c2f(). Running your tests with OpenMPI, you can see different pointers but same offsets test_comms.c: 0 4 -1419258464 909680992 909680992 1 4 1152255392 -2144517440 -2144517440 2 4 -306719328 768197312 768197312 3 4 -1766709856 715374384 715374384 test_comms.f90: 0 0 3 1 0 3 2 0 3 3 0 3 Running with MPICH, you can see C/Fortran MPI_Comm's are the same. 
But why ranks do not have the same integer/bitfield, I don't know. You need to dig into mpich code. test_comms.c: 0 4 1140850688 -2080374780 -2080374780 1 4 1140850688 -2080374780 -2080374780 2 4 1140850688 -2080374782 -2080374782 3 4 1140850688 -2080374782 -2080374782 test_comms.f90: 0 1140850688 -2080374780 1 1140850688 -2080374780 2 1140850688 -2080374782 3 1140850688 -2080374782 In summary, users should not expect MPI_Comm variables are equal across ranks, and MPI_Send an MPI_Comm variable to remote ranks. --Junchao Zhang On Wed, Apr 22, 2020 at 8:56 AM Patrick Sanan wrote: > Perhaps the confusion here is related to the fact that an MPI_Comm is not > an integer identifying the communicator. Rather, > it's a pointer to a data structure which contains information about the > communicator (I'm not positive but probably something like this > > ). > > You're converting that pointer to an int and printing it out. The value > happens to be the same on all ranks except 0, but this > doesn't directly tell you anything about equality of the MPI_comm objects > that those pointers point to. > > Am Mi., 22. Apr. 2020 um 15:28 Uhr schrieb Matthew Knepley < > knepley at gmail.com>: > >> On Wed, Apr 22, 2020 at 3:07 AM Marius Buerkle wrote: >> >>> I see, but I am still puzzeled, why are the communicators different on >>> different notes eventhough it is the same object. >>> >> >> This is the output of MPI_Comm_dup() on line 126 of tagm.c. Therefore, >> dup comms are not guaranteed to have the same id >> across multiple processes. >> >> Thanks, >> >> Matt >> >> >>> >>> >>> PETSc creates a duplicate of the communicator during object creation. >>> >>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscCommDuplicate.html >>> >>> Jose >>> >>> >>> > El 22 abr 2020, a las 8:40, Marius Buerkle escribi?: >>> > >>> > Hi Dave, >>> > >>> > I want to use it in Fortran if possible. But I tried both C and >>> Fortran just to see if it works in general. I am using MPICH 3.3.2. I >>> attached the MWE for C and Fortran with the output I get. >>> > >>> > Marius >>> > >>> > >>> > >>> > >>> > >>> > Hi, >>> > >>> > What is PetscObjectGetComm expected to return? >>> > >>> > As Patrick said, it returns the communicator associated with the petsc >>> object. >>> > >>> > I thought it would give the MPI communicator the object lives on. So >>> if I create A matrix on PETSC_COMM_WORLD a call of PetscObjectGetComm for A >>> it would return PETSC_COMM_WORLD? But it seems to return something else, >>> and while most of the nodes return a similar communicator some are giving a >>> different one. >>> > >>> > How are you actually comparing the communicators (send code snippet)? >>> Which MPI implementation are you using? And when are comparing comms is the >>> comparison code written in C it FORTRAN? >>> > >>> > >>> > That said, is there a way to get the MPI communicator a matrix lives >>> on? >>> > >>> > You are using the correct function. There is a macro as well but it?s >>> best to use the function. >>> > >>> > Thanks, >>> > Dave >>> > >>> > >>> > >>> > >>> > Best, >>> > Marius >>> > >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
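To make the advice in this thread concrete, here is a minimal C sketch (not taken from any of the posts above) of how a matrix's communicator can be retrieved and compared. It uses only standard PETSc/MPI calls (MatCreate, PetscObjectGetComm, MPI_Comm_compare); since PETSc duplicates the user communicator at object creation (PetscCommDuplicate), the expected answer is MPI_CONGRUENT rather than MPI_IDENT.

#include <petscmat.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  Mat            A;
  MPI_Comm       comm;
  int            result;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);

  /* Cast to PetscObject, as suggested earlier in the thread */
  ierr = PetscObjectGetComm((PetscObject)A, &comm); CHKERRQ(ierr);

  /* Compare communicators with MPI_Comm_compare rather than comparing the
     handle values themselves. Because PETSc gives the object a duplicated
     communicator, MPI_CONGRUENT (same group of processes, different
     context) is the expected result, not MPI_IDENT. */
  ierr = MPI_Comm_compare(PETSC_COMM_WORLD, comm, &result); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "same process group: %s\n",
                     (result == MPI_IDENT || result == MPI_CONGRUENT) ? "yes" : "no"); CHKERRQ(ierr);

  ierr = MatDestroy(&A); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

The same check works from Fortran, where MPI_Comm is an integer handle: pass the communicator returned by PetscObjectGetComm to MPI_Comm_compare instead of printing or comparing the handle value directly.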
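Junchao's point about handle values can also be seen without PETSc at all. Below is a small, self-contained MPI sketch (again not from the original mails; the printed numbers depend on the MPI implementation) that duplicates MPI_COMM_WORLD on every rank, prints the integer handle a Fortran code would see via MPI_Comm_c2f, and confirms with MPI_Comm_compare that the duplicate is congruent everywhere even if the printed handles differ between ranks.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm dup;
  MPI_Fint fhandle;
  int      rank, result;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* Every rank duplicates the same communicator... */
  MPI_Comm_dup(MPI_COMM_WORLD, &dup);

  /* ...but the numeric value of the handle is implementation-defined and
     need not be identical across ranks (or between C and Fortran). */
  fhandle = MPI_Comm_c2f(dup);
  MPI_Comm_compare(MPI_COMM_WORLD, dup, &result);
  printf("rank %d: Fortran handle %d, congruent to MPI_COMM_WORLD: %d\n",
         rank, (int)fhandle, result == MPI_CONGRUENT);

  MPI_Comm_free(&dup);
  MPI_Finalize();
  return 0;
}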
URL: From fdkong.jd at gmail.com Wed Apr 22 17:46:24 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 22 Apr 2020 16:46:24 -0600 Subject: [petsc-users] Fwd: Installation problem when Configuring SUPERLU_DIST In-Reply-To: <1fa97190-389c-4c2a-b23d-8a741ba12651@googlegroups.com> References: <1fa97190-389c-4c2a-b23d-8a741ba12651@googlegroups.com> Message-ID: We did not get a stack back this time. Let us ping PETSc guys. Thanks, Fande, ---------- Forwarded message --------- From: Zile Wang Date: Tue, Apr 21, 2020 at 10:55 AM Subject: Re: Installation problem when Configuring SUPERLU_DIST To: moose-users Thanks, Fande. I removed the packages. However, the same problem. It seems that "*Configuring SUPERLU_DIST" *does not work. There is no error message in the configure.log file. Am I missing something? Zi-Le Wang -- You received this message because you are subscribed to the Google Groups "moose-users" group. To unsubscribe from this group and stop receiving emails from it, send an email to moose-users+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/1fa97190-389c-4c2a-b23d-8a741ba12651%40googlegroups.com . -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- ================================================================================ ================================================================================ Starting configure run at Wed, 22 Apr 2020 00:37:15 +0800 Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --prefix=/home/wangzl/moose-compilers/petsc-3.11.4 --with-debugging=0 --with-ssl=0 --with-pic=1 --with-openmp=1 --with-mpi=1 --with-shared-libraries=1 --with-cxx-dialect=C++11 --with-fortran-bindings=0 --with-sowing=0 --download-hypre=/home/wangzl/packages/hypre-2.15.1.tar.gz --download-fblaslapack=/home/wangzl/packages/fblaslapack-3.4.2.tar.gz --download-metis=/home/wangzl/packages/petsc-pkg-metis-49e61501c498.tar.gz --download-ptscotch=/home/wangzl/packages/petsc-pkg-scotch-c15036faac5f.tar.gz --download-parmetis=/home/wangzl/packages/petsc-pkg-parmetis-73dab469aa36.tar.gz --download-scalapack=/home/wangzl/packages/petsc-pkg-scalapack-459410776585.tar.gz --download-mumps=/home/wangzl/packages/petsc-pkg-mumps-5fe5b9e56f78.tar.gz --download-slepc=/home/wangzl/packages/slepc --download-superlu_dist=/home/wangzl/packages/superlu_dist-6.1.1.tar.gz PETSC_DIR=/tmp/stack_temp.rFVgkc/petsc-3.11.4 PETSC_ARCH=linux-opt Working directory: /tmp/stack_temp.rFVgkc/petsc-3.11.4 Machine platform: uname_result(system='Linux', node='w003', release='5.3.18-lp152.10-default', version='#1 SMP Wed Apr 1 11:48:53 UTC 2020 (ccee559)', machine='x86_64', processor='x86_64') Python version: 3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:16:01) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] ================================================================================ ================================================================================ TEST configureExternalPackagesDir from config.framework(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/framework.py:830) TESTING: configureExternalPackagesDir from config.framework(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/framework.py:830) ================================================================================ TEST configureDebuggers from 
config.utilities.debuggers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/debuggers.py:21) TESTING: configureDebuggers from config.utilities.debuggers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/debuggers.py:21) Find a default debugger and determine its arguments Checking for program /home/wangzl/miniconda3/bin/gdb...not found Checking for program /home/wangzl/projects/moose/python/peacock/gdb...not found Checking for program /home/wangzl/software/fftwmpi/bin/gdb...not found Checking for program /home/wangzl/software/byacc/bin/gdb...not found Checking for program /home/wangzl/software/m4/bin/gdb...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/gdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/gdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/gdb...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/gdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/gdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/gdb...not found Checking for program /home/wangzl/miniconda3/bin/gdb...not found Checking for program /home/wangzl/projects/moose/python/peacock/gdb...not found Checking for program /home/wangzl/software/fftwmpi/bin/gdb...not found Checking for program /home/wangzl/software/byacc/bin/gdb...not found Checking for program /home/wangzl/software/m4/bin/gdb...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/gdb...not found Checking for program /home/wangzl/bin/gdb...not found Checking for program /usr/local/bin/gdb...not found Checking for program /usr/bin/gdb...found Defined make macro "GDB" to "/usr/bin/gdb" Checking for program /home/wangzl/miniconda3/bin/dbx...not found Checking for program /home/wangzl/projects/moose/python/peacock/dbx...not found Checking for program /home/wangzl/software/fftwmpi/bin/dbx...not found Checking for program /home/wangzl/software/byacc/bin/dbx...not found Checking for program /home/wangzl/software/m4/bin/dbx...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/dbx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/dbx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/dbx...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/dbx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/dbx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/dbx...not found Checking for program /home/wangzl/miniconda3/bin/dbx...not found Checking for program /home/wangzl/projects/moose/python/peacock/dbx...not found Checking for program /home/wangzl/software/fftwmpi/bin/dbx...not found Checking for program /home/wangzl/software/byacc/bin/dbx...not found Checking for program /home/wangzl/software/m4/bin/dbx...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/dbx...not found Checking for program /home/wangzl/bin/dbx...not found Checking for program /usr/local/bin/dbx...not found Checking for program /usr/bin/dbx...not found Checking for program /bin/dbx...not found Checking for program /opt/pbs/bin/dbx...not found Checking for program /home/apps/dbx...not found Checking for program /home/wangzl/miniconda3/bin/xdb...not found 
Checking for program /home/wangzl/projects/moose/python/peacock/xdb...not found Checking for program /home/wangzl/software/fftwmpi/bin/xdb...not found Checking for program /home/wangzl/software/byacc/bin/xdb...not found Checking for program /home/wangzl/software/m4/bin/xdb...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/xdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/xdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/xdb...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/xdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/xdb...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/xdb...not found Checking for program /home/wangzl/miniconda3/bin/xdb...not found Checking for program /home/wangzl/projects/moose/python/peacock/xdb...not found Checking for program /home/wangzl/software/fftwmpi/bin/xdb...not found Checking for program /home/wangzl/software/byacc/bin/xdb...not found Checking for program /home/wangzl/software/m4/bin/xdb...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/xdb...not found Checking for program /home/wangzl/bin/xdb...not found Checking for program /usr/local/bin/xdb...not found Checking for program /usr/bin/xdb...not found Checking for program /bin/xdb...not found Checking for program /opt/pbs/bin/xdb...not found Checking for program /home/apps/xdb...not found Executing: uname -s stdout: Linux Defined make macro "DSYMUTIL" to "true" Defined "USE_GDB_DEBUGGER" to "1" ================================================================================ TEST configureGit from config.sourceControl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/sourceControl.py:24) TESTING: configureGit from config.sourceControl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/sourceControl.py:24) Find the Git executable Checking for program /home/wangzl/miniconda3/bin/git...not found Checking for program /home/wangzl/projects/moose/python/peacock/git...not found Checking for program /home/wangzl/software/fftwmpi/bin/git...not found Checking for program /home/wangzl/software/byacc/bin/git...not found Checking for program /home/wangzl/software/m4/bin/git...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/git...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/git...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/git...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/git...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/git...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/git...not found Checking for program /home/wangzl/miniconda3/bin/git...not found Checking for program /home/wangzl/projects/moose/python/peacock/git...not found Checking for program /home/wangzl/software/fftwmpi/bin/git...not found Checking for program /home/wangzl/software/byacc/bin/git...not found Checking for program /home/wangzl/software/m4/bin/git...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/git...not found Checking for program /home/wangzl/bin/git...not found Checking for program /usr/local/bin/git...not found Checking for program /usr/bin/git...found Defined make macro "GIT" 
to "git" Executing: git --version stdout: git version 2.25.0 ================================================================================ TEST configureMercurial from config.sourceControl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/sourceControl.py:35) TESTING: configureMercurial from config.sourceControl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/sourceControl.py:35) Find the Mercurial executable Checking for program /home/wangzl/miniconda3/bin/hg...not found Checking for program /home/wangzl/projects/moose/python/peacock/hg...not found Checking for program /home/wangzl/software/fftwmpi/bin/hg...not found Checking for program /home/wangzl/software/byacc/bin/hg...not found Checking for program /home/wangzl/software/m4/bin/hg...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/hg...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/hg...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/hg...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/hg...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/hg...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/hg...not found Checking for program /home/wangzl/miniconda3/bin/hg...not found Checking for program /home/wangzl/projects/moose/python/peacock/hg...not found Checking for program /home/wangzl/software/fftwmpi/bin/hg...not found Checking for program /home/wangzl/software/byacc/bin/hg...not found Checking for program /home/wangzl/software/m4/bin/hg...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/hg...not found Checking for program /home/wangzl/bin/hg...not found Checking for program /usr/local/bin/hg...not found Checking for program /usr/bin/hg...not found Checking for program /bin/hg...not found Checking for program /opt/pbs/bin/hg...not found Checking for program /home/apps/hg...not found ================================================================================ TEST configureDirectories from PETSc.options.petscdir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/petscdir.py:23) TESTING: configureDirectories from PETSc.options.petscdir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/petscdir.py:23) Checks PETSC_DIR and sets if not set Version Information: #define PETSC_VERSION_RELEASE 1 #define PETSC_VERSION_MAJOR 3 #define PETSC_VERSION_MINOR 11 #define PETSC_VERSION_SUBMINOR 4 #define PETSC_VERSION_PATCH 0 #define PETSC_VERSION_DATE "Sep, 28, 2019" #define PETSC_VERSION_GIT "v3.11.4" #define PETSC_VERSION_DATE_GIT "2019-09-28 13:30:42 -0500" #define PETSC_VERSION_EQ(MAJOR,MINOR,SUBMINOR) \ #define PETSC_VERSION_ PETSC_VERSION_EQ #define PETSC_VERSION_LT(MAJOR,MINOR,SUBMINOR) \ #define PETSC_VERSION_LE(MAJOR,MINOR,SUBMINOR) \ #define PETSC_VERSION_GT(MAJOR,MINOR,SUBMINOR) \ #define PETSC_VERSION_GE(MAJOR,MINOR,SUBMINOR) \ ================================================================================ TEST getDatafilespath from PETSc.options.dataFilesPath(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/dataFilesPath.py:29) TESTING: getDatafilespath from PETSc.options.dataFilesPath(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/dataFilesPath.py:29) Checks what DATAFILESPATH should be ================================================================================ TEST configureInstallationMethod from 
PETSc.options.petscclone(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/petscclone.py:20) TESTING: configureInstallationMethod from PETSc.options.petscclone(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/petscclone.py:20) This is a tarball installation ================================================================================ TEST setNativeArchitecture from PETSc.options.arch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/arch.py:25) TESTING: setNativeArchitecture from PETSc.options.arch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/arch.py:25) ================================================================================ TEST configureArchitecture from PETSc.options.arch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/arch.py:37) TESTING: configureArchitecture from PETSc.options.arch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/arch.py:37) Checks PETSC_ARCH and sets if not set ================================================================================ TEST setInstallDir from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:35) TESTING: setInstallDir from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:35) setup installDir to either prefix or if that is not set to PETSC_DIR/PETSC_ARCH Defined make macro "PREFIXDIR" to "/home/wangzl/moose-compilers/petsc-3.11.4" ================================================================================ TEST saveReconfigure from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:79) TESTING: saveReconfigure from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:79) ================================================================================ TEST cleanConfDir from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:72) TESTING: cleanConfDir from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:72) ================================================================================ TEST configureInstallDir from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:56) TESTING: configureInstallDir from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:56) Makes installDir subdirectories if it does not exist for both prefix install location and PETSc work install location Changed persistence directory to /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/lib/petsc/conf ================================================================================ TEST restoreReconfigure from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:92) TESTING: restoreReconfigure from PETSc.options.installDir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/installDir.py:92) ================================================================================ TEST setExternalPackagesDir from PETSc.options.externalpackagesdir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/externalpackagesdir.py:15) TESTING: setExternalPackagesDir from PETSc.options.externalpackagesdir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/externalpackagesdir.py:15) ================================================================================ TEST cleanExternalpackagesDir from 
PETSc.options.externalpackagesdir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/externalpackagesdir.py:22) TESTING: cleanExternalpackagesDir from PETSc.options.externalpackagesdir(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/externalpackagesdir.py:22) ================================================================================ TEST printEnvVariables from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1650) TESTING: printEnvVariables from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1650) **** printenv **** __LMOD_REF_COUNT_INFOPATH=/opt/intel/documentation_2019/en/debugger/gdb-ia/info:1 CSHEDIT=emacs LMOD_PKG=/usr/share/lmod/lmod XDG_RUNTIME_DIR=/run/user/1043 USER=wangzl http_proxy=http://127.0.0.1:25463 SSH_TTY=/dev/pts/0 UBUNTU_MENUPROXY=1 XAUTHLOCALHOSTNAME=w003 CXX=mpicxx HOSTTYPE=x86_64 ALSA_CONFIG_PATH=/etc/alsa-pulse.conf __LMOD_STACK_DAALROOT=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L2RhYWw= __LMOD_REF_COUNT_LOADEDMODULES=impi/19u5:1;mkl/19u5:1;intel/19u5:1 HOST=w003 SHELL=/bin/bash OSTYPE=linux MALLOC_PERTURB_=69 __LMOD_STACK_INTEL_PYTHONHOME=L29wdC9pbnRlbC9kZWJ1Z2dlcl8yMDE5L3B5dGhvbi9pbnRlbDY0Lw== __LMOD_REF_COUNT_PATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/bin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64:1;/opt/intel/debugger_2019/gdb/intel64/bin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin:1;/home/wangzl/miniconda3/bin:1;/home/wangzl/projects/moose/python/peacock:1;/home/wangzl/software/fftwmpi/bin:1;/home/wangzl/software/byacc/bin:1;/home/wangzl/software/m4/bin:1;/home/wangzl/software/flex-2.6.4/bin:1;/home/wangzl/bin:1;/usr/local/bin:1;/usr/bin:1;/bin:1;/opt/pbs/bin:1;/home/apps:1 F77=mpif77 LS_OPTIONS=-N --color=tty -T 0 INFOPATH=/opt/intel/documentation_2019/en/debugger/gdb-ia/info __LMOD_STACK_PSTLROOT=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L3BzdGw= PAGER=less LS_COLORS=no=00:fi=00:di=01;34:ln=00;36:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=41;33;01:ex=00;32:*.cmd=00;32:*.exe=01;32:*.com=01;32:*.bat=01;32:*.btm=01;32:*.dll=01;32:*.tar=00;31:*.tbz=00;31:*.tgz=00;31:*.rpm=00;31:*.deb=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.lzma=00;31:*.zip=00;31:*.zoo=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.tb2=00;31:*.tz2=00;31:*.tbz2=00;31:*.xz=00;31:*.avi=01;35:*.bmp=01;35:*.dl=01;35:*.fli=01;35:*.gif=01;35:*.gl=01;35:*.jpg=01;35:*.jpeg=01;35:*.mkv=01;35:*.mng=01;35:*.mov=01;35:*.mp4=01;35:*.mpg=01;35:*.pcx=01;35:*.pbm=01;35:*.pgm=01;35:*.png=01;35:*.ppm=01;35:*.svg=01;35:*.tga=01;35:*.tif=01;35:*.webm=01;35:*.webp=01;35:*.wmv=01;35:*.xbm=01;35:*.xcf=01;35:*.xpm=01;35:*.aiff=00;32:*.ape=00;32:*.au=00;32:*.flac=00;32:*.m4a=00;32:*.mid=00;32:*.mp3=00;32:*.mpc=00;32:*.ogg=00;32:*.voc=00;32:*.wav=00;32:*.wma=00;32:*.wv=00;32: HISTSIZE=1000 __LMOD_REF_COUNT_MANPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/man:1;/opt/intel/man/common:1;/opt/intel/documentation_2019/en/debugger/gdb-ia/man:1;/usr/local/man:1;/usr/share/man:1;/opt/pbs/share/man:1 __LMOD_STACK_FI_PROVIDER_PATH=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L21waS9pbnRlbDY0L2xpYmZhYnJpYy9saWIvcHJvdg== OLDPWD=/tmp/stack_temp.rFVgkc CPU=x86_64 TERM=xterm 
__LMOD_REF_COUNT_NLSPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64/locale/%l_%t/%N:1;/opt/intel/debugger_2019/gdb/intel64/share/locale/%l_%t/%N:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin/locale/%l_%t/%N:1 F90=mpif90 LESSOPEN=lessopen.sh %s QEMU_AUDIO_DRV=pa LIBGL_DEBUG=quiet LANG=en_US.UTF-8 MORE=-sl LESS_ADVANCED_PREPROCESSOR=no MODULEPATH=/opt/lmod/modulefiles:/usr/share/lmod/modulefiles BASH_FUNC_ml%%=() { eval $($LMOD_DIR/ml_cmd "$@") } LMOD_SETTARG_FULL_SUPPORT=no JAVA_BINDIR=/usr/lib64/jvm/jre-11-openjdk/bin LMOD_PREPEND_BLOCK=normal XKEYSYMDB=/usr/X11R6/lib/X11/XKeysymDB INTEL_LICENSE_FILE=/opt/intel/compilers_and_libraries_2019.5.281/linux/licenses:/opt/intel/licenses:/root/intel/licenses DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1043/bus MAIL=/var/spool/mail/wangzl MAN_KEEP_FORMATTING=yes SDL_AUDIODRIVER=pulse JAVA_HOME=/usr/lib64/jvm/jre-11-openjdk PKG_CONFIG_PATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/bin/pkgconfig SSH_CLIENT=59.66.60.150 7586 22 MPI_HOME=/home/wangzl/software/mpich-3.2.1 FI_PROVIDER_PATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib/prov XNLSPATH=/usr/share/X11/nls LMOD_SETTARG_CMD=: PACKAGES_DIR=/home/wangzl/moose-compilers FC=mpif90 PETSC_DIR=/home/wangzl/moose-compilers/petsc-3.11.4 QT_SYSTEM_DIR=/usr/share/desktop-data PYTHONSTARTUP=/etc/pythonstart __LMOD_STACK_MKLROOT=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L21rbA== PROFILEREAD=true HOME=/home/wangzl COLORTERM=1 LOGNAME=wangzl JAVA_ROOT=/usr/lib64/jvm/jre-11-openjdk XDG_CONFIG_DIRS=/etc/xdg AUDIODRIVER=pulseaudio LMOD_VERSION=8.3.6 LOADEDMODULES=impi/19u5:mkl/19u5:intel/19u5 PATH=/home/wangzl/miniconda3/bin:/home/wangzl/projects/moose/python/peacock:/home/wangzl/software/fftwmpi/bin:/home/wangzl/software/byacc/bin:/home/wangzl/software/m4/bin:/home/wangzl/software/flex-2.6.4/bin:/opt/intel/compilers_and_libraries_2019.5.281/linux/bin:/opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64:/opt/intel/debugger_2019/gdb/intel64/bin:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin:/home/wangzl/miniconda3/bin:/home/wangzl/projects/moose/python/peacock:/home/wangzl/software/fftwmpi/bin:/home/wangzl/software/byacc/bin:/home/wangzl/software/m4/bin:/home/wangzl/software/flex-2.6.4/bin:/home/wangzl/bin:/usr/local/bin:/usr/bin:/bin:/opt/pbs/bin:/home/apps LESS=-M -I -R LESSCLOSE=lessclose.sh %s %s _ModuleTable001_=X01vZHVsZVRhYmxlXz17WyJNVHZlcnNpb24iXT0zLFsiY19yZWJ1aWxkVGltZSJdPWZhbHNlLFsiY19zaG9ydFRpbWUiXT1mYWxzZSxkZXB0aFQ9e30sZmFtaWx5PXt9LG1UPXtpbXBpPXtbImZuIl09Ii9vcHQvbG1vZC9tb2R1bGVmaWxlcy9pbXBpLzE5dTUubHVhIixbImZ1bGxOYW1lIl09ImltcGkvMTl1NSIsWyJsb2FkT3JkZXIiXT0xLHByb3BUPXt9LFsic3RhY2tEZXB0aCJdPTEsWyJzdGF0dXMiXT0iYWN0aXZlIixbInVzZXJOYW1lIl09ImltcGkvMTl1NSIsfSxpbnRlbD17WyJmbiJdPSIvb3B0L2xtb2QvbW9kdWxlZmlsZXMvaW50ZWwvMTl1NS5sdWEiLFsiZnVsbE5hbWUiXT0iaW50ZWwvMTl1NSIsWyJsb2FkT3JkZXIiXT0zLHByb3BUPXt9LFsic3RhY2tEZXB0aCJd LMOD_COLORIZE=no GTK2_MODULES=unity-gtk-module LMOD_FULL_SETTARG_SUPPORT=no XDG_SESSION_ID=1368 G_BROKEN_FILENAMES=1 WINDOWMANAGER=/usr/bin/startplasma-x11 __LMOD_STACK_IPPROOT=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L2lwcA== SSH_CONNECTION=59.66.60.150 7586 166.111.26.45 22 G_FILENAME_ENCODING=@locale,UTF-8,GB2312,GB18030,GBK,ISO-8859-1 FROM_HEADER= XCRYSDEN_TOPDIR=/home/fujh/opt/xcrysden-1.5.60-bin-semishared 
__LMOD_REF_COUNT_CLASSPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/daal.jar:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/mpi.jar:1 _ModuleTable002_=PTAsWyJzdGF0dXMiXT0iYWN0aXZlIixbInVzZXJOYW1lIl09ImludGVsLzE5dTUiLH0sbWtsPXtbImZuIl09Ii9vcHQvbG1vZC9tb2R1bGVmaWxlcy9ta2wvMTl1NS5sdWEiLFsiZnVsbE5hbWUiXT0ibWtsLzE5dTUiLFsibG9hZE9yZGVyIl09Mixwcm9wVD17fSxbInN0YWNrRGVwdGgiXT0xLFsic3RhdHVzIl09ImFjdGl2ZSIsWyJ1c2VyTmFtZSJdPSJta2wvMTl1NSIsfSx9LG1wYXRoQT17Ii9vcHQvbG1vZC9tb2R1bGVmaWxlcyIsIi91c3Ivc2hhcmUvbG1vZC9tb2R1bGVmaWxlcyIsfSxbInN5c3RlbUJhc2VNUEFUSCJdPSIvdXNyL3NoYXJlL2xtb2QvbW9kdWxlZmlsZXMiLH0= LMOD_CMD=/usr/share/lmod/lmod/libexec/lmod I_MPI_ROOT=/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi LMOD_DIR=/usr/share/lmod/lmod/libexec MODULESHOME=/usr/share/lmod/lmod _=./configure __LMOD_REF_COUNT_LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64:1;/opt/intel/debugger_2019/libipt/intel64/lib:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib:1;/usr/lib64:1;/home/wangzl/software/fftwmpi/lib64:1;/home/wangzl/software/flex-2.6.4/lib64:1;/home/wangzl/software/jpeg-9c/lib:1 NLSPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64/locale/%l_%t/%N:/opt/intel/debugger_2019/gdb/intel64/share/locale/%l_%t/%N:/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin/locale/%l_%t/%N MACHTYPE=x86_64-suse-linux LIBRARY_PATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7:/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib:/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64:/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 __LMOD_STACK_INTEL_LICENSE_FILE=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L2xpY2Vuc2VzOi9vcHQvaW50ZWwvbGljZW5zZXM6L3Jvb3QvaW50ZWwvbGljZW5zZXM= BASH_FUNC_module%%=() { eval $($LMOD_CMD bash "$@") && eval $(${LMOD_SETTARG_CMD:-:} -s sh) } MKLROOT=/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl MINICOM=-c on STACK_SRC=/tmp/stack_temp.rFVgkc CONFIG_SITE=/usr/share/site/x86_64-unknown-linux-gnu DAALROOT=/opt/intel/compilers_and_libraries_2019.5.281/linux/daal _LMFILES_=/opt/lmod/modulefiles/impi/19u5.lua:/opt/lmod/modulefiles/mkl/19u5.lua:/opt/lmod/modulefiles/intel/19u5.lua __LMOD_STACK_TBBROOT=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L3RiYg== https_proxy=http://127.0.0.1:25463 CVS_RSH=ssh BASH_ENV=/usr/share/lmod/8.3.6/init/bash TBBROOT=/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb DISPLAY=localhost:11.0 __LMOD_REF_COUNT__LMFILES_=/opt/lmod/modulefiles/impi/19u5.lua:1;/opt/lmod/modulefiles/mkl/19u5.lua:1;/opt/lmod/modulefiles/intel/19u5.lua:1 
LESSKEY=/etc/lesskey.bin CLASSPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/daal.jar:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/mpi.jar GTK3_MODULES=unity-gtk-module PWD=/tmp/stack_temp.rFVgkc/petsc-3.11.4 MANPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/man:/opt/intel/man/common:/opt/intel/documentation_2019/en/debugger/gdb-ia/man:/usr/local/man:/usr/share/man:/opt/pbs/share/man EDITOR=vim CC=mpicc LD_LIBRARY_PATH=/usr/lib64:/home/wangzl/software/fftwmpi/lib64:/home/wangzl/software/flex-2.6.4/lib64:/home/wangzl/software/jpeg-9c/lib:/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64:/opt/intel/debugger_2019/libipt/intel64/lib:/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4:/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7:/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib:/usr/lib64:/home/wangzl/software/fftwmpi/lib64:/home/wangzl/software/flex-2.6.4/lib64:/home/wangzl/software/jpeg-9c/lib:: MALLOC_CHECK_=3 XDG_DATA_DIRS=/home/wangzl/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share PSTLROOT=/opt/intel/compilers_and_libraries_2019.5.281/linux/pstl SHLVL=1 __LMOD_REF_COUNT_PKG_CONFIG_PATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/bin/pkgconfig:1 GPG_TTY=/dev/pts/0 __LMOD_REF_COUNT_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4:1 INTEL_PYTHONHOME=/opt/intel/debugger_2019/python/intel64/ LMOD_ROOT=/usr/share/lmod HOSTNAME=w003 JRE_HOME=/usr/lib64/jvm/java-11-openjdk-11 __LMOD_REF_COUNT_CPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/include:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/pstl/include:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/include:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/include:1;/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/include:1 IPPROOT=/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp CPATH=/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/include:/opt/intel/compilers_and_libraries_2019.5.281/linux/pstl/include:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/include:/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/include:/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/include __LMOD_STACK_I_MPI_ROOT=L29wdC9pbnRlbC9jb21waWxlcnNfYW5kX2xpYnJhcmllc18yMDE5LjUuMjgxL2xpbnV4L21waQ== __LMOD_REF_COUNT_MODULEPATH=/opt/lmod/modulefiles:1;/usr/share/lmod/modulefiles:1 _ModuleTable_Sz_=2 ================================================================================ TEST resetEnvCompilers from 
config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1657) TESTING: resetEnvCompilers from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1657) =============================================================================== ***** WARNING: CC (set to mpicc) found in environment variables - ignoring use ./configure CC=$CC if you really want to use that value ****** =============================================================================== =============================================================================== ***** WARNING: CXX (set to mpicxx) found in environment variables - ignoring use ./configure CXX=$CXX if you really want to use that value ****** =============================================================================== =============================================================================== ***** WARNING: FC (set to mpif90) found in environment variables - ignoring use ./configure FC=$FC if you really want to use that value ****** =============================================================================== =============================================================================== ***** WARNING: F77 (set to mpif77) found in environment variables - ignoring use ./configure F77=$F77 if you really want to use that value ****** =============================================================================== =============================================================================== ***** WARNING: F90 (set to mpif90) found in environment variables - ignoring use ./configure F90=$F90 if you really want to use that value ****** =============================================================================== ================================================================================ TEST checkEnvCompilers from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1687) TESTING: checkEnvCompilers from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1687) ================================================================================ TEST checkMPICompilerOverride from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1620) TESTING: checkMPICompilerOverride from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1620) Check if --with-mpi-dir is used along with CC CXX or FC compiler options. 
This usually prevents mpi compilers from being used - so issue a warning ================================================================================ TEST requireMpiLdPath from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1641) TESTING: requireMpiLdPath from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1641) OpenMPI wrappers require LD_LIBRARY_PATH set ================================================================================ TEST checkVendor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:444) TESTING: checkVendor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:444) Determine the compiler vendor Compiler vendor is "" ================================================================================ TEST checkInitialFlags from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:454) TESTING: checkInitialFlags from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:454) Initialize the compiler and linker flags Initialized CFLAGS to Initialized CFLAGS to Initialized LDFLAGS to Initialized CUDAFLAGS to Initialized CUDAFLAGS to Initialized LDFLAGS to Initialized CXXFLAGS to Initialized CXX_CXXFLAGS to Initialized LDFLAGS to Initialized FFLAGS to Initialized FFLAGS to Initialized LDFLAGS to Initialized CPPFLAGS to Initialized CUDAPPFLAGS to -Wno-deprecated-gpu-targets Initialized CXXCPPFLAGS to Initialized CC_LINKER_FLAGS to [] Initialized CXX_LINKER_FLAGS to [] Initialized FC_LINKER_FLAGS to [] Initialized CUDAC_LINKER_FLAGS to [] Initialized sharedLibraryFlags to [] Initialized dynamicLibraryFlags to [] ================================================================================ TEST checkCCompiler from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:587) TESTING: checkCCompiler from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:587) Locate a functional C compiler Executing: mpicc --help stdout: Usage: gcc [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. 
-print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. -E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by gcc. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . Checking for program /home/wangzl/miniconda3/bin/mpicc...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpicc...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpicc...not found Checking for program /home/wangzl/software/byacc/bin/mpicc...not found Checking for program /home/wangzl/software/m4/bin/mpicc...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpicc...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc...found Defined make macro "CC" to "mpicc" All intermediate test results are stored in /tmp/petsc-wjcu960y All intermediate test results are stored in /tmp/petsc-wjcu960y/config.setCompilers Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers 
/tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Testing executable /tmp/petsc-wjcu960y/config.setCompilers/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest ================================================================================ TEST checkCPreprocessor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:620) TESTING: checkCPreprocessor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:620) Locate a functional C preprocessor Checking for program /home/wangzl/miniconda3/bin/mpicc...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpicc...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpicc...not found Checking for program /home/wangzl/software/byacc/bin/mpicc...not found Checking for program /home/wangzl/software/m4/bin/mpicc...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpicc...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc...found Defined make macro "CPP" to "mpicc -E" Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: ================================================================================ TEST checkCUDACompiler from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:661) TESTING: checkCUDACompiler from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:661) Locate a functional CUDA compiler ================================================================================ TEST checkCUDAPreprocessor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:701) TESTING: checkCUDAPreprocessor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:701) Locate a functional CUDA preprocessor ================================================================================ TEST checkCxxCompiler from 
config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:814) TESTING: checkCxxCompiler from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:814) Locate a functional Cxx compiler Executing: mpicxx --help stdout: Usage: g++ [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. -print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. -E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by g++. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . 
Checking for program /home/wangzl/miniconda3/bin/mpicxx...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpicxx...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpicxx...not found Checking for program /home/wangzl/software/byacc/bin/mpicxx...not found Checking for program /home/wangzl/software/m4/bin/mpicxx...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpicxx...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicxx...found Defined make macro "CXX" to "mpicxx" Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Testing executable /tmp/petsc-wjcu960y/config.setCompilers/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest ================================================================================ TEST checkCxxPreprocessor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:852) TESTING: checkCxxPreprocessor from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:852) Locate a functional Cxx preprocessor Checking for program /home/wangzl/miniconda3/bin/mpicxx...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpicxx...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpicxx...not found Checking for program /home/wangzl/software/byacc/bin/mpicxx...not 
found Checking for program /home/wangzl/software/m4/bin/mpicxx...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpicxx...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpicxx...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicxx...found Defined make macro "CXXCPP" to "mpicxx -E" Executing: mpicxx -E -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Preprocess stderr before filtering:: Preprocess stderr after filtering:: ================================================================================ TEST checkFortranCompiler from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:970) TESTING: checkFortranCompiler from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:970) Locate a functional Fortran compiler Executing: mpif90 --help stdout: Usage: gfortran [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. -print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. 
-E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by gfortran. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . Checking for program /home/wangzl/miniconda3/bin/mpif90...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpif90...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpif90...not found Checking for program /home/wangzl/software/byacc/bin/mpif90...not found Checking for program /home/wangzl/software/m4/bin/mpif90...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpif90...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpif90...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpif90...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpif90...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpif90...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpif90...found Defined make macro "FC" to "mpif90" Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Testing executable /tmp/petsc-wjcu960y/config.setCompilers/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest ================================================================================ TEST checkFortranComments from 
config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:991) TESTING: checkFortranComments from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:991) Make sure fortran comment "!" works Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main ! comment end Fortran comments can use ! in column 1 ================================================================================ TEST checkLargeFileIO from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1121) TESTING: checkLargeFileIO from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1121) ================================================================================ TEST checkArchiver from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1220) TESTING: checkArchiver from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1220) Check that the archiver exists and can make a library usable by the compiler Executing: ar -V stdout: GNU ar (GNU Binutils; openSUSE Leap 15.2) 2.34.0.20200325-lp152.1 Copyright (C) 2020 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty. Executing: ar -V stdout: GNU ar (GNU Binutils; openSUSE Leap 15.2) 2.34.0.20200325-lp152.1 Copyright (C) 2020 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty. 
Defined make macro "FAST_AR_FLAGS" to "Scq" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int foo(int a) { return a+1; } Checking for program /home/wangzl/miniconda3/bin/ar...not found Checking for program /home/wangzl/projects/moose/python/peacock/ar...not found Checking for program /home/wangzl/software/fftwmpi/bin/ar...not found Checking for program /home/wangzl/software/byacc/bin/ar...not found Checking for program /home/wangzl/software/m4/bin/ar...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/ar...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/ar...not found Checking for program /home/wangzl/miniconda3/bin/ar...not found Checking for program /home/wangzl/projects/moose/python/peacock/ar...not found Checking for program /home/wangzl/software/fftwmpi/bin/ar...not found Checking for program /home/wangzl/software/byacc/bin/ar...not found Checking for program /home/wangzl/software/m4/bin/ar...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ar...not found Checking for program /home/wangzl/bin/ar...not found Checking for program /usr/local/bin/ar...not found Checking for program /usr/bin/ar...found Defined make macro "AR" to "/usr/bin/ar" Checking for program /home/wangzl/miniconda3/bin/ranlib...not found Checking for program /home/wangzl/projects/moose/python/peacock/ranlib...not found Checking for program /home/wangzl/software/fftwmpi/bin/ranlib...not found Checking for program /home/wangzl/software/byacc/bin/ranlib...not found Checking for program /home/wangzl/software/m4/bin/ranlib...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ranlib...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/ranlib...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/ranlib...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/ranlib...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/ranlib...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/ranlib...not found Checking for program /home/wangzl/miniconda3/bin/ranlib...not found Checking for program /home/wangzl/projects/moose/python/peacock/ranlib...not found Checking for program /home/wangzl/software/fftwmpi/bin/ranlib...not found Checking for program /home/wangzl/software/byacc/bin/ranlib...not found Checking for program /home/wangzl/software/m4/bin/ranlib...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ranlib...not found Checking for program /home/wangzl/bin/ranlib...not found Checking for program /usr/local/bin/ranlib...not found Checking for program /usr/bin/ranlib...found Defined make macro "RANLIB" to "/usr/bin/ranlib -c" Executing: /usr/bin/ar cr /tmp/petsc-wjcu960y/config.setCompilers/libconf1.a /tmp/petsc-wjcu960y/config.setCompilers/conf1.o 
Executing: /usr/bin/ranlib -c /tmp/petsc-wjcu960y/config.setCompilers/libconf1.a Possible ERROR while running ranlib: stderr: /usr/bin/ranlib: invalid option -- 'c' Ranlib is not functional with your archiver. Try --with-ranlib=true if ranlib is unnecessary. Executing: ar -V stdout: GNU ar (GNU Binutils; openSUSE Leap 15.2) 2.34.0.20200325-lp152.1 Copyright (C) 2020 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty. Executing: ar -V stdout: GNU ar (GNU Binutils; openSUSE Leap 15.2) 2.34.0.20200325-lp152.1 Copyright (C) 2020 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty. Defined make macro "FAST_AR_FLAGS" to "Scq" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int foo(int a) { return a+1; } Checking for program /home/wangzl/miniconda3/bin/ar...not found Checking for program /home/wangzl/projects/moose/python/peacock/ar...not found Checking for program /home/wangzl/software/fftwmpi/bin/ar...not found Checking for program /home/wangzl/software/byacc/bin/ar...not found Checking for program /home/wangzl/software/m4/bin/ar...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/ar...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/ar...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/ar...not found Checking for program /home/wangzl/miniconda3/bin/ar...not found Checking for program /home/wangzl/projects/moose/python/peacock/ar...not found Checking for program /home/wangzl/software/fftwmpi/bin/ar...not found Checking for program /home/wangzl/software/byacc/bin/ar...not found Checking for program /home/wangzl/software/m4/bin/ar...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ar...not found Checking for program /home/wangzl/bin/ar...not found Checking for program /usr/local/bin/ar...not found Checking for program /usr/bin/ar...found Defined make macro "AR" to "/usr/bin/ar" Checking for program /home/wangzl/miniconda3/bin/ranlib...not found Checking for program /home/wangzl/projects/moose/python/peacock/ranlib...not found Checking for program /home/wangzl/software/fftwmpi/bin/ranlib...not found Checking for program /home/wangzl/software/byacc/bin/ranlib...not found Checking for program /home/wangzl/software/m4/bin/ranlib...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ranlib...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/ranlib...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/ranlib...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/ranlib...not found Checking for program 
/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/ranlib...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/ranlib...not found Checking for program /home/wangzl/miniconda3/bin/ranlib...not found Checking for program /home/wangzl/projects/moose/python/peacock/ranlib...not found Checking for program /home/wangzl/software/fftwmpi/bin/ranlib...not found Checking for program /home/wangzl/software/byacc/bin/ranlib...not found Checking for program /home/wangzl/software/m4/bin/ranlib...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ranlib...not found Checking for program /home/wangzl/bin/ranlib...not found Checking for program /usr/local/bin/ranlib...not found Checking for program /usr/bin/ranlib...found Defined make macro "RANLIB" to "/usr/bin/ranlib" Executing: /usr/bin/ar cr /tmp/petsc-wjcu960y/config.setCompilers/libconf1.a /tmp/petsc-wjcu960y/config.setCompilers/conf1.o Executing: /usr/bin/ranlib /tmp/petsc-wjcu960y/config.setCompilers/libconf1.a Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" extern int foo(int); int main() { int b = foo(1); if (b); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -L/tmp/petsc-wjcu960y/config.setCompilers -lconf1 Defined make macro "AR_FLAGS" to "cr" Defined make macro "AR_LIB_SUFFIX" to "a" ================================================================================ TEST checkSharedLinker from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1334) TESTING: checkSharedLinker from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1334) Check that the linker can produce shared libraries Executing: uname -s stdout: Linux Checking shared linker mpicc using flags ['-shared'] Checking for program /home/wangzl/miniconda3/bin/mpicc...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpicc...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpicc...not found Checking for program /home/wangzl/software/byacc/bin/mpicc...not found Checking for program /home/wangzl/software/m4/bin/mpicc...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpicc...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc...found Defined make macro "LD_SHARED" to "mpicc" Trying C compiler flag Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful 
compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -shared /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Valid C linker flag -shared Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) = fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.setCompilers/conftest.o: relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC collect2: error: ld returned 1 exit status Rejected C compiler flag because it was not compatible with shared linker mpicc using flags ['-shared'] Trying C compiler flag -fPIC Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -fPIC Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Valid C linker flag -shared Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) 
= fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int foo(void); int main() { int ret = foo(); if (ret) {} ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -L/tmp/petsc-wjcu960y/config.setCompilers -lconftest Using shared linker mpicc with flags ['-shared'] and library extension so Executing: uname -s stdout: Linux ================================================================================ TEST checkPIC from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1069) TESTING: checkPIC from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1069) Determine the PIC option for each compiler Trying C for PIC code without any compiler flag Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) = fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Accepted C PIC code without compiler flag Trying Cxx for PIC code without any compiler flag Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) 
= fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.setCompilers/conftest.o: relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC collect2: error: ld returned 1 exit status Rejected Cxx compiler flag because shared linker cannot handle it Trying Cxx compiler flag -fPIC for PIC code Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -fPIC Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) = fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Accepted Cxx compiler flag -fPIC for PIC code Trying FC for PIC code without any compiler flag Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: function foo(a) real:: a,x,bar common /xx/ x x=a foo = bar(x) end Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.setCompilers/conftest.o: relocation R_X86_64_32 against symbol `xx_' can not be used when making a shared object; recompile with -fPIC collect2: error: ld returned 1 exit status Rejected FC compiler flag because shared linker cannot handle it Trying FC compiler flag -fPIC for PIC code Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -fPIC Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: function foo(a) real:: a,x,bar common /xx/ x x=a foo = bar(x) end Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Accepted FC compiler flag -fPIC for PIC code ================================================================================ TEST checkSharedLinkerPaths from 
config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1428) TESTING: checkSharedLinkerPaths from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1428) Determine the shared linker path options - IRIX: -rpath - Linux, OSF: -Wl,-rpath, - Solaris: -R - FreeBSD: -Wl,-R, Executing: uname -s stdout: Linux Executing: mpicc -V Trying C linker flag -Wl,-rpath, Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -Wl,-rpath,/tmp/stack_temp.rFVgkc/petsc-3.11.4 -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Valid C linker flag -Wl,-rpath,/tmp/stack_temp.rFVgkc/petsc-3.11.4 Executing: uname -s stdout: Linux Executing: mpicc -V Trying Cxx linker flag -Wl,-rpath, Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -Wl,-rpath,/tmp/stack_temp.rFVgkc/petsc-3.11.4 /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Valid Cxx linker flag -Wl,-rpath,/tmp/stack_temp.rFVgkc/petsc-3.11.4 Executing: uname -s stdout: Linux Executing: mpicc -V Trying FC linker flag -Wl,-rpath, Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -Wl,-rpath,/tmp/stack_temp.rFVgkc/petsc-3.11.4 -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Valid FC linker flag -Wl,-rpath,/tmp/stack_temp.rFVgkc/petsc-3.11.4 ================================================================================ TEST checkLibC from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1463) TESTING: checkLibC from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1463) Test whether we need to explicitly include libc in shared linking - Mac OSX requires an explicit reference to libc for shared linking Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int foo(void) {void *chunk = malloc(31); free(chunk); return 0;} Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o Shared linking does not require an explicit libc reference ================================================================================ TEST checkDynamicLinker from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1512) TESTING: checkDynamicLinker from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1512) Check that the linker can dynamicaly load shared libraries Checking for header: dlfcn.h All intermediate test 
results are stored in /tmp/petsc-wjcu960y/config.headers Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_DLFCN_H" to "1" Checking for functions [dlopen dlsym dlclose] in library ['dl'] [] All intermediate test results are stored in /tmp/petsc-wjcu960y/config.libraries Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char dlopen(); static void _check_dlopen() { dlopen(); } char dlsym(); static void _check_dlsym() { dlsym(); } char dlclose(); static void _check_dlclose() { dlclose(); } int main() { _check_dlopen(); _check_dlsym(); _check_dlclose();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC /tmp/petsc-wjcu960y/config.libraries/conftest.o -ldl Defined "HAVE_LIBDL" to "1" Adding ['dl'] to LIBS Executing: uname -s stdout: Linux Checking dynamic linker mpicc using flags ['-shared'] Checking for program /home/wangzl/miniconda3/bin/mpicc...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpicc...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpicc...not found Checking for program /home/wangzl/software/byacc/bin/mpicc...not found Checking for program /home/wangzl/software/m4/bin/mpicc...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpicc...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpicc...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc...found Defined make macro "DYNAMICLINKER" to "mpicc" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -ldl Valid C linker flag -shared Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int foo(void) {printf("test");return 0;} Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.setCompilers/conftest.c: In 
function 'main': /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:11:3: warning: implicit declaration of function 'printf' [-Wimplicit-function-declaration] printf("Could not load symbol\n"); ^~~~~~ /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:11:3: warning: incompatible implicit declaration of built-in function 'printf' /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:11:3: note: include '<stdio.h>' or provide a declaration of 'printf' /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:15:3: warning: incompatible implicit declaration of built-in function 'printf' printf("Invalid return from foo()\n"); ^~~~~~ /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:15:3: note: include '<stdio.h>' or provide a declaration of 'printf' /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:19:3: warning: incompatible implicit declaration of built-in function 'printf' printf("Could not close library\n"); ^~~~~~ /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:19:3: note: include '<stdio.h>' or provide a declaration of 'printf' Source: #include "confdefs.h" #include "conffix.h" #include <dlfcn.h> int main() { void *handle = dlopen("/tmp/petsc-wjcu960y/config.setCompilers/libconftest.so", 0); int (*foo)(void) = (int (*)(void)) dlsym(handle, "foo"); if (!foo) { printf("Could not load symbol\n"); return -1; } if ((*foo)()) { printf("Invalid return from foo()\n"); return -1; } if (dlclose(handle)) { printf("Could not close library\n"); return -1; } ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -ldl Using dynamic linker mpicc with flags ['-shared'] and library extension so ================================================================================ TEST output from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1564) TESTING: output from config.setCompilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/setCompilers.py:1564) Output module data as defines and substitutions Substituting "CC" with "mpicc" Substituting "CFLAGS" with " -fPIC" Defined make macro "CC_LINKER_SLFLAG" to "-Wl,-rpath," Substituting "CPP" with "mpicc -E" Substituting "CPPFLAGS" with "" Substituting "CXX" with "mpicxx" Substituting "CXX_CXXFLAGS" with " -fPIC" Substituting "CXXFLAGS" with "" Substituting "CXX_LINKER_SLFLAG" with "-Wl,-rpath," Substituting "CXXCPP" with "mpicxx -E" Substituting "CXXCPPFLAGS" with "" Substituting "FC" with "mpif90" Substituting "FFLAGS" with " -fPIC" Defined make macro "FC_LINKER_SLFLAG" to "-Wl,-rpath," Substituting "LDFLAGS" with "" Substituting "LIBS" with "-ldl " Substituting "SHARED_LIBRARY_FLAG" with "-shared" ================================================================================ TEST checkSharedDynamicPicOptions from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:36) TESTING: checkSharedDynamicPicOptions from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:36) ================================================================================ TEST configureSharedLibraries from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:52) TESTING: configureSharedLibraries from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:52) Checks whether shared libraries should be used, for which you must -
Specify --with-shared-libraries - Have found a working shared linker Defines PETSC_USE_SHARED_LIBRARIES if they are used Executing: uname -s stdout: Linux Defined make rule "shared_arch" with dependencies "shared_linux" and code [] Defined make macro "SONAME_FUNCTION" to "$(1).so.$(2)" Defined make macro "SL_LINKER_FUNCTION" to "-shared -Wl,-soname,$(call SONAME_FUNCTION,$(notdir $(1)),$(2))" Defined make macro "BUILDSHAREDLIB" to "yes" Defined "HAVE_SHARED_LIBRARIES" to "1" Defined "USE_SHARED_LIBRARIES" to "1" ================================================================================ TEST configureDynamicLibraries from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:96) TESTING: configureDynamicLibraries from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:96) Checks whether dynamic loading is available (with dlfcn.h and libdl) Defined "HAVE_DYNAMIC_LIBRARIES" to "1" ================================================================================ TEST configureSerializedFunctions from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:102) TESTING: configureSerializedFunctions from PETSc.options.sharedLibraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/sharedLibraries.py:102) Defines PETSC_SERIALIZE_FUNCTIONS if they are used Requires shared libraries ================================================================================ TEST configureIndexSize from PETSc.options.indexTypes(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/indexTypes.py:30) TESTING: configureIndexSize from PETSc.options.indexTypes(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/indexTypes.py:30) Defined make macro "PETSC_INDEX_SIZE" to "32" ================================================================================ TEST configureCompilerFlags from config.compilerFlags(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilerFlags.py:72) TESTING: configureCompilerFlags from config.compilerFlags(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilerFlags.py:72) Get the default compiler flags Defined make macro "MPICC_SHOW" to "gcc -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl" Trying C compiler flag -Wall Trying C compiler flag -Wwrite-strings Trying C compiler flag -Wno-strict-aliasing Trying C compiler flag -Wno-unknown-pragmas Trying C compiler flag -fstack-protector Trying C compiler flag -mfp16-format=ieee Rejected C compiler flag -mfp16-format=ieee Trying C compiler flag -fvisibility=hidden Defined make macro "MPICC_SHOW" to "gcc -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release 
-Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl" Trying C compiler flag -g Trying C compiler flag -O Defined make macro "MPICXX_SHOW" to "g++ -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpicxx -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl" Trying Cxx compiler flag -Wall Trying Cxx compiler flag -Wwrite-strings Trying Cxx compiler flag -Wno-strict-aliasing Trying Cxx compiler flag -Wno-unknown-pragmas Trying Cxx compiler flag -fstack-protector Trying Cxx compiler flag -fvisibility=hidden Defined make macro "MPICXX_SHOW" to "g++ -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpicxx -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl" Trying Cxx compiler flag -g Trying Cxx compiler flag -O Defined make macro "MPIFC_SHOW" to "gfortran -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl" Trying FC compiler flag -Wall Trying FC compiler flag -ffree-line-length-0 Trying FC compiler flag -Wno-unused-dummy-argument Defined make macro "MPIFC_SHOW" to "gfortran -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl" Trying FC compiler flag -g Trying FC compiler flag -O Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: 
Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wall Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wwrite-strings Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wno-strict-aliasing Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wno-unknown-pragmas Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -fstack-protector Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -mfp16-format=ieee /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Possible ERROR while running compiler: exit code 1 stderr: gcc: error: unrecognized command line option '-mfp16-format=ieee' Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Rejecting compiler flag -mfp16-format=ieee due to nonzero status from link Rejecting compiler flag -mfp16-format=ieee due to gcc: error: unrecognized command line option '-mfp16-format=ieee'
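The flag-selection pass above (and the C++ and Fortran passes that follow) recompiles the same empty conftest with each candidate flag appended to the flags accepted so far: a clean compile keeps the flag, while a nonzero exit status or new stderr output rejects it. -mfp16-format=ieee is an ARM-only gcc option, which is why this x86_64 gcc rejects it while the generic warning flags go through. In sketch form (the accepted list is the one printed above):

/* conftest.c -- reused unchanged for every flag probe */
#include "confdefs.h"
#include "conffix.h"

int main(void)
{
  return 0;
}

/* For each candidate flag F:
 *   mpicc -c -o conftest.o <flags accepted so far> F conftest.c
 * success -> flag kept    (-Wall, -Wwrite-strings, -Wno-strict-aliasing,
 *                          -Wno-unknown-pragmas, -fstack-protector,
 *                          -fvisibility=hidden, -g, -O)
 * failure -> flag dropped (gcc: error: unrecognized command line option '-mfp16-format=ieee')
 */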
PETSc Error: No output file produced Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -fvisibility=hidden Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -g Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -O Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wall Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wwrite-strings Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wno-strict-aliasing Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wno-unknown-pragmas Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -fstack-protector Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler 
flag -fvisibility=hidden Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -g Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -O Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -Wall Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -ffree-line-length-0 Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -Wno-unused-dummy-argument Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -g Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -O Executing: mpicc --version stdout: gcc (SUSE Linux) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. getCompilerVersion: mpicc gcc (SUSE Linux) 7.5.0 Executing: mpicc -show stdout: gcc -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl Executing: mpicc --help stdout: Usage: gcc [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. 
Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. -print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. -E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by gcc. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . Executing: uname -s stdout: Linux Executing: mpicc --help stdout: Usage: gcc [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . 
-print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. -print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. -E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by gcc. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . =============================================================================== ***** WARNING: Using default optimization C flags -g -O You might consider manually setting optimal optimization flags for your system with COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for examples =============================================================================== Executing: mpicxx --version stdout: g++ (SUSE Linux) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. getCompilerVersion: mpicxx g++ (SUSE Linux) 7.5.0 Executing: mpicxx -show stdout: g++ -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpicxx -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl Executing: mpicxx --help stdout: Usage: g++ [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. 
--help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. -print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. -E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by g++. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . Executing: uname -s stdout: Linux Executing: mpicxx -show stdout: g++ -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpicxx -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl Executing: mpicxx --help stdout: Usage: g++ [options] file... 
Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. -print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. -E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by g++. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . =============================================================================== ***** WARNING: Using default C++ optimization flags -g -O You might consider manually setting optimal optimization flags for your system with CXXOPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for examples =============================================================================== Executing: mpif90 --version stdout: GNU Fortran (SUSE Linux) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
getCompilerVersion: mpif90 GNU Fortran (SUSE Linux) 7.5.0 Executing: mpif90 -show stdout: gfortran -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl Executing: mpif90 --help stdout: Usage: gfortran [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and multiple library search directories. -print-multi-os-directory Display the relative path to OS libraries. -print-sysroot Display the target libraries directory. -print-sysroot-headers-suffix Display the sysroot suffix used to find headers. -Wa, Pass comma-separated on to the assembler. -Wp, Pass comma-separated on to the preprocessor. -Wl, Pass comma-separated on to the linker. -Xassembler Pass on to the assembler. -Xpreprocessor Pass on to the preprocessor. -Xlinker Pass on to the linker. -save-temps Do not delete intermediate files. -save-temps= Do not delete intermediate files. -no-canonical-prefixes Do not canonicalize paths when building relative prefixes to other gcc components. -pipe Use pipes rather than intermediate files. -time Time the execution of each subprocess. -specs= Override built-in specs with the contents of . -std= Assume that the input sources are for . --sysroot= Use as the root directory for headers and libraries. -B Add to the compiler's search paths. -v Display the programs invoked by the compiler. -### Like -v but options quoted and commands not executed. -E Preprocess only; do not compile, assemble or link. -S Compile only; do not assemble or link. -c Compile and assemble, but do not link. -o Place the output into . -pie Create a position independent executable. -shared Create a shared library. -x Specify the language of the following input files. Permissible languages include: c c++ assembler none 'none' means revert to the default behavior of guessing the language based on the file's extension. 
Options starting with -g, -f, -m, -O, -W, or --param are automatically passed on to the various sub-processes invoked by gfortran. In order to pass other options on to these processes the -W options must be used. For bug reporting instructions, please see: . Executing: mpif90 --version stdout: GNU Fortran (SUSE Linux) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Executing: mpif90 --version stdout: GNU Fortran (SUSE Linux) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Executing: mpif90 --version stdout: GNU Fortran (SUSE Linux) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Executing: mpif90 -show stdout: gfortran -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl =============================================================================== ***** WARNING: Using default FORTRAN optimization flags -g -O You might consider manually setting optimal optimization flags for your system with FOPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for examples =============================================================================== ================================================================================ TEST checkRestrict from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:146) TESTING: checkRestrict from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:146) Check for the C/CXX restrict keyword Executing: mpicc -V All intermediate test results are stored in /tmp/petsc-wjcu960y/config.compilers Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.compilers/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.compilers/conftest.c:5:20: warning: unused variable ???x??? 
[-Wunused-variable] float * __restrict x;; ^ Source: #include "confdefs.h" #include "conffix.h" int main() { float * __restrict x;; return 0; } compilers: Set C restrict keyword to __restrict Defined "C_RESTRICT" to "__restrict" ================================================================================ TEST checkCFormatting from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:399) TESTING: checkCFormatting from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:399) Activate format string checking if using the GNU compilers ================================================================================ TEST checkCInline from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:116) TESTING: checkCInline from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:116) Check for C inline keyword Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" static inline int foo(int a) {return a;} int main() { foo(1);; return 0; } compilers: Set C Inline keyword to inline Defined "C_INLINE" to "inline" ================================================================================ TEST checkDynamicLoadFlag from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:410) TESTING: checkDynamicLoadFlag from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:410) Checks that dlopen() takes RTLD_XXX, and defines PETSC_HAVE_RTLD_XXX if it does Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include char *libname; int main() { dlopen(libname, RTLD_LAZY); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -ldl Defined "HAVE_RTLD_LAZY" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include char *libname; int main() { dlopen(libname, RTLD_NOW); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -ldl Defined "HAVE_RTLD_NOW" to "1" Executing: mpicc -c -o 
/tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include char *libname; int main() { dlopen(libname, RTLD_LOCAL); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -ldl Defined "HAVE_RTLD_LOCAL" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include char *libname; int main() { dlopen(libname, RTLD_GLOBAL); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -ldl Defined "HAVE_RTLD_GLOBAL" to "1" ================================================================================ TEST checkCLibraries from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:210) TESTING: checkCLibraries from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:210) Determines the libraries needed to link with C compiled code Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include void asub(void) {char s[16];printf("testing %s",s);} Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main print*,'testing' stop end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -ldl C libraries are not needed when using Fortran linker Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include void asub(void) {char s[16];printf("testing %s",s);} Executing: mpicxx -c -o 
/tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Executing: mpicxx -o /tmp/petsc-wjcu960y/config.compilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -ldl C libraries are not needed when using C++ linker ================================================================================ TEST checkDependencyGenerationFlag from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1599) TESTING: checkDependencyGenerationFlag from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1599) Check if -MMD works for dependency generation, and add it if it does Trying C compiler flag -MMD -MP Defined make macro "C_DEPFLAGS" to "-MMD -MP" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -MMD -MP /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Trying Cxx compiler flag -MMD -MP Defined make macro "CXX_DEPFLAGS" to "-MMD -MP" Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC -MMD -MP /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Trying FC compiler flag -MMD -MP Defined make macro "FC_DEPFLAGS" to "-MMD -MP" Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -MMD -MP /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end ================================================================================ TEST checkC99Flag from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1645) TESTING: checkC99Flag from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1645) Check for -std=c99 or equivalent flag Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.setCompilers/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.setCompilers/conftest.c:7:11: warning: variable ???x??? 
set but not used [-Wunused-but-set-variable] float x[2],y; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { float x[2],y; y = FLT_ROUNDS; // c++ comment int j = 2; for (int i=0; i<2; i++){ x[i] = i*j*y; } ; return 0; } Accepted C99 compile flag: Defined "HAVE_C99" to "1" ================================================================================ TEST checkRestrict from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:146) TESTING: checkRestrict from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:146) Check for the C/CXX restrict keyword Executing: mpicc -V Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.compilers/conftest.cc: In function ???int main()???: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:5:20: warning: unused variable ???x??? [-Wunused-variable] float * __restrict x;; ^ Source: #include "confdefs.h" #include "conffix.h" int main() { float * __restrict x;; return 0; } compilers: Set Cxx restrict keyword to __restrict Defined "CXX_RESTRICT" to "__restrict" ================================================================================ TEST checkCxxNamespace from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:450) TESTING: checkCxxNamespace from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:450) Checks that C++ compiler supports namespaces, and if it does defines HAVE_CXX_NAMESPACE Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" namespace petsc {int dummy;} int main() { ; return 0; } Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" template struct a {}; namespace trouble{ template struct a : public ::a {}; } trouble::a uugh; int main() { ; return 0; } compilers: C++ has namespaces Defined "HAVE_CXX_NAMESPACE" to "1" ================================================================================ TEST checkCxxOptionalExtensions from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:423) TESTING: checkCxxOptionalExtensions from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:423) Check whether the C++ compiler (IBM xlC, OSF5) need special flag for .c files which contain C++ Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing 
-Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { class somename { int i; };; return 0; } ================================================================================ TEST checkCxxInline from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:131) TESTING: checkCxxInline from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:131) Check for C++ inline keyword Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" static inline int foo(int a) {return a;} int main() { foo(1);; return 0; } compilers: Set Cxx Inline keyword to inline Defined "CXX_INLINE" to "inline" ================================================================================ TEST checkCxxLibraries from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:508) TESTING: checkCxxLibraries from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:508) Determines the libraries needed to link with C++ Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include void asub(void) {std::vector v; try { throw 20; } catch (int e) { std::cout << "An exception occurred"; }} Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Executing: mpicc -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/confc.o: in function `asub()': /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined reference to `__cxa_allocate_exception' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined reference to `typeinfo for int' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined reference to `__cxa_throw' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined 
reference to `__cxa_begin_catch' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined reference to `std::cout' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined reference to `std::basic_ostream >& std::operator<< >(std::basic_ostream >&, char const*)' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined reference to `__cxa_end_catch' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.cc:7: undefined reference to `__cxa_end_catch' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/confc.o: in function `_GLOBAL__sub_I_conftest.cc': /usr/include/c++/7/iostream:74: undefined reference to `std::ios_base::Init::Init()' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/confc.o: in function `__static_initialization_and_destruction_0': /usr/include/c++/7/iostream:74: undefined reference to `std::ios_base::Init::~Init()' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/confc.o:(.data.rel.local.DW.ref._ZTIi[DW.ref._ZTIi]+0x0): undefined reference to `typeinfo for int' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/confc.o:(.data.rel.local.DW.ref.__gxx_personality_v0[DW.ref.__gxx_personality_v0]+0x0): undefined reference to `__gxx_personality_v0' collect2: error: ld returned 1 exit status Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include void asub(void) {std::vector v; try { throw 20; } catch (int e) { std::cout << "An exception occurred"; }} Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Executing: mpicc -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -lstdc++ -ldl compilers: C++ requires -lstdc++ to link with C compiler Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include void asub(void) {std::vector v; try { throw 20; } catch (int e) { 
std::cout << "An exception occurred"; }} Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main print*,'testing' stop end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -lstdc++ -ldl C++ libraries are not needed when using FC linker ================================================================================ TEST checkCxx11 from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:465) TESTING: checkCxx11 from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:465) Determine the option needed to support the C++11 dialect We auto-detect C++11 if the compiler supports it without options, otherwise we require with-cxx-dialect=C++11 to try adding flags to support it. Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc: In function ???int main()???: /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc:13:24: warning: unused variable ???x??? [-Wunused-variable] const double x = dist(mt); ^ Source: #include "confdefs.h" #include "conffix.h" #include template constexpr T Cubed( T x ) { return x*x*x; } int main() { std::random_device rd; std::mt19937 mt(rd()); std::normal_distribution dist(0,1); const double x = dist(mt); ; return 0; } ================================================================================ TEST checkFortranTypeSizes from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:722) TESTING: checkFortranTypeSizes from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:722) Check whether real*8 is supported and suggest flags which will allow support Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2:21: real*8 variable 1 Warning: Unused variable ???variable??? 
declared at (1) [-Wunused-variable] Source: program main real*8 variable end ================================================================================ TEST checkFortranNameMangling from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:782) TESTING: checkFortranNameMangling from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:782) Checks Fortran name mangling, and defines HAVE_FORTRAN_UNDERSCORE, HAVE_FORTRAN_NOUNDERSCORE, HAVE_FORTRAN_CAPS, or HAVE_FORTRAN_STDCALL Testing Fortran mangling type underscore with code void d1chk_(void){return;} Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" void d1chk_(void){return;} Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main call d1chk() end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -lstdc++ -ldl compilers: Fortran name mangling is underscore Defined "HAVE_FORTRAN_UNDERSCORE" to "1" Executing: mpif90 --version stdout: GNU Fortran (SUSE Linux) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
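The result recorded here means that, with this gfortran, a Fortran-level name such as d1chk appears to C as the symbol d1chk_ (the conftest above checks the reverse direction: a C stub void d1chk_(void){return;} satisfies the Fortran call d1chk()). As a rough sketch, not taken from configure.log, of what the same convention looks like when C code calls a Fortran subroutine, assuming the single-trailing-underscore mangling detected here:

/* sketch: calling the Fortran subroutine d1chk from C, assuming the
   single-trailing-underscore name mangling detected by configure above */
void d1chk_(void);            /* Fortran "subroutine d1chk()" as seen from C */

int main(void)
{
  d1chk_();                   /* resolves against the Fortran object at link time */
  return 0;
}

Either direction works only because both sides agree on the mangled symbol, which is what the HAVE_FORTRAN_UNDERSCORE define records for the rest of the PETSc build.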
Defined "FORTRAN_CHARLEN_T" to "int" ================================================================================ TEST checkFortranNameManglingDouble from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:823) TESTING: checkFortranNameManglingDouble from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:823) Checks if symbols containing an underscore append an extra underscore, and defines HAVE_FORTRAN_UNDERSCORE_UNDERSCORE if necessary Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" void d1_chk__(void){return;} Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main call d1_chk() end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2: undefined reference to `d1_chk_' collect2: error: ld returned 1 exit status ================================================================================ TEST checkFortranPreprocessor from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:833) TESTING: checkFortranPreprocessor from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:833) Determine if Fortran handles preprocessing properly compilers: Fortran uses CPP preprocessor Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main #define dummy dummy #ifndef dummy fooey #endif end ================================================================================ TEST checkFortranDefineCompilerOption from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:857) TESTING: checkFortranDefineCompilerOption from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:857) Check if -WF,-Dfoobar or -Dfoobar is the compiler option to define a macro Defined make macro "FC_DEFINE_FLAG" to "-D" compilers: Fortran uses -D for defining macro Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -DTesting /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main #define 
dummy dummy #ifndef Testing fooey #endif end ================================================================================ TEST checkFortranLibraries from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:877) TESTING: checkFortranLibraries from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:877) Substitutes for FLIBS the libraries needed to link with Fortran This macro is intended to be used in those situations when it is necessary to mix, e.g. C++ and Fortran 77, source code into a single program or shared library. For example, if object files from a C++ and Fortran 77 compiler must be linked together, then the C++ compiler/linker must be used for linking (since special C++-ish things need to happen at link time like calling global constructors, instantiating templates, enabling exception support, etc.). However, the Fortran 77 intrinsic and run-time libraries must be linked in as well, but the C++ compiler/linker does not know how to add these Fortran 77 libraries. This code was translated from the autoconf macro which was packaged in its current form by Matthew D. Langston . However, nearly all of this macro came from the OCTAVE_FLIBS macro in octave-2.0.13/aclocal.m4, and full credit should go to John W. Eaton for writing this extremely useful macro. Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main #include call MPI_Allreduce() end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++ -ldl Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: subroutine asub() print*,'testing' call MPI_Allreduce() return end Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Executing: mpicc -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/confc.o: in function `asub_': /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2: undefined reference to `_gfortran_st_write' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2: undefined reference to `_gfortran_transfer_character_write' 
/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2: undefined reference to `_gfortran_st_write_done' collect2: error: ld returned 1 exit status Fortran code cannot directly be linked with C linker, therefor will determine needed Fortran libraries Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: subroutine asub() print*,'testing' call MPI_Allreduce() return end Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Executing: mpicxx -o /tmp/petsc-wjcu960y/config.compilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/confc.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/confc.o: in function `asub_': /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2: undefined reference to `_gfortran_st_write' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2: undefined reference to `_gfortran_transfer_character_write' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.compilers/conftest.F90:2: undefined reference to `_gfortran_st_write_done' collect2: error: ld returned 1 exit status Fortran code cannot directly be linked with C++ linker, therefor will determine needed Fortran libraries Executing: mpif90 -V Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -v -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++ -ldl stdout: mpif90 for the Intel(R) MPI Library 2019 Update 5 for Linux* Copyright 2003-2019, Intel Corporation. 
Possible ERROR while running linker: stdout: mpif90 for the Intel(R) MPI Library 2019 Update 5 for Linux* Copyright 2003-2019, Intel Corporation.stderr: Driving: gfortran -o /tmp/petsc-wjcu960y/config.compilers/conftest -v -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++ -ldl -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -Wl,-z,now -Wl,-z,relro -Wl,-z,noexecstack -Xlinker --enable-new-dtags -ldl -l gfortran -l m -shared-libgcc Using built-in specs. COLLECT_GCC=gfortran COLLECT_LTO_WRAPPER=/usr/lib64/gcc/x86_64-suse-linux/7/lto-wrapper OFFLOAD_TARGET_NAMES=hsa:nvptx-none Target: x86_64-suse-linux Configured with: ../configure --prefix=/usr --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib64 --libexecdir=/usr/lib64 --enable-languages=c,c++,objc,fortran,obj-c++,ada,go --enable-offload-targets=hsa,nvptx-none=/usr/nvptx-none, --without-cuda-driver --enable-checking=release --disable-werror --with-gxx-include-dir=/usr/include/c++/7 --enable-ssp --disable-libssp --disable-libvtv --disable-libcc1 --disable-plugin --with-bugurl=https://bugs.opensuse.org/ --with-pkgversion='SUSE Linux' --with-slibdir=/lib64 --with-system-zlib --enable-libstdcxx-allocator=new --disable-libstdcxx-pch --enable-version-specific-runtime-libs --with-gcc-major-version-only --enable-linker-build-id --enable-linux-futex --enable-gnu-indirect-function --program-suffix=-7 --without-system-libunwind --enable-multilib --with-arch-32=x86-64 --with-tune=generic --build=x86_64-suse-linux --host=x86_64-suse-linux Thread model: posix gcc version 7.5.0 (SUSE Linux) Reading specs from /usr/lib64/gcc/x86_64-suse-linux/7/libgfortran.spec rename spec lib to liborig COLLECT_GCC_OPTIONS='-o' '/tmp/petsc-wjcu960y/config.compilers/conftest' '-v' '-fPIC' '-Wall' '-ffree-line-length-0' '-Wno-unused-dummy-argument' '-g' '-O' '-I' '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0' '-I' '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include' '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt' '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib' '-shared-libgcc' '-mtune=generic' '-march=x86-64' COMPILER_PATH=/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/:/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ 
LIBRARY_PATH=/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7/:/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib/:/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64/:/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/lib/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-o' '/tmp/petsc-wjcu960y/config.compilers/conftest' '-v' '-fPIC' '-Wall' '-ffree-line-length-0' '-Wno-unused-dummy-argument' '-g' '-O' '-I' '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0' '-I' '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include' '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt' '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib' '-shared-libgcc' '-mtune=generic' '-march=x86-64' /usr/lib64/gcc/x86_64-suse-linux/7/collect2 -plugin /usr/lib64/gcc/x86_64-suse-linux/7/liblto_plugin.so -plugin-opt=/usr/lib64/gcc/x86_64-suse-linux/7/lto-wrapper -plugin-opt=-fresolution=/tmp/ccBpEEeu.res -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lquadmath -plugin-opt=-pass-through=-lm -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc --build-id --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o /tmp/petsc-wjcu960y/config.compilers/conftest /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crt1.o /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crti.o /usr/lib64/gcc/x86_64-suse-linux/7/crtbegin.o -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/lib -L/usr/lib64/gcc/x86_64-suse-linux/7/../../.. 
/tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++ -ldl --enable-new-dtags -rpath /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -rpath /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -z now -z relro -z noexecstack --enable-new-dtags -ldl -lgfortran -lm -lgcc_s -lgcc -lquadmath -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib64/gcc/x86_64-suse-linux/7/crtend.o /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crtn.o COLLECT_GCC_OPTIONS='-o' '/tmp/petsc-wjcu960y/config.compilers/conftest' '-v' '-fPIC' '-Wall' '-ffree-line-length-0' '-Wno-unused-dummy-argument' '-g' '-O' '-I' '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0' '-I' '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include' '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt' '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib' '-shared-libgcc' '-mtune=generic' '-march=x86-64' compilers: Checking arg mpif90 compilers: Unknown arg mpif90 compilers: Checking arg for compilers: Unknown arg for compilers: Checking arg the compilers: Unknown arg the compilers: Checking arg Intel(R) compilers: Unknown arg Intel(R) compilers: Checking arg MPI compilers: Unknown arg MPI compilers: Checking arg Library compilers: Unknown arg Library compilers: Checking arg 2019 compilers: Unknown arg 2019 compilers: Checking arg Update compilers: Unknown arg Update compilers: Checking arg 5 compilers: Unknown arg 5 compilers: Checking arg for compilers: Unknown arg for compilers: Checking arg Linux* compilers: Unknown arg Linux* compilers: Checking arg Copyright compilers: Unknown arg Copyright compilers: Checking arg 2003-2019, compilers: Unknown arg 2003-2019, compilers: Checking arg Intel compilers: Unknown arg Intel compilers: Checking arg Corporation. compilers: Unknown arg Corporation. 
compilers: Checking arg Driving: compilers: Unknown arg Driving: compilers: Checking arg gfortran compilers: Unknown arg gfortran compilers: Checking arg -o compilers: Unknown arg -o compilers: Checking arg /tmp/petsc-wjcu960y/config.compilers/conftest compilers: Unknown arg /tmp/petsc-wjcu960y/config.compilers/conftest compilers: Checking arg -v compilers: Unknown arg -v compilers: Checking arg -fPIC compilers: Unknown arg -fPIC compilers: Checking arg -Wall compilers: Unknown arg -Wall compilers: Checking arg -ffree-line-length-0 compilers: Unknown arg -ffree-line-length-0 compilers: Checking arg -Wno-unused-dummy-argument compilers: Unknown arg -Wno-unused-dummy-argument compilers: Checking arg -g compilers: Unknown arg -g compilers: Checking arg -O compilers: Unknown arg -O compilers: Checking arg /tmp/petsc-wjcu960y/config.compilers/conftest.o compilers: Unknown arg /tmp/petsc-wjcu960y/config.compilers/conftest.o compilers: Checking arg -lstdc++ compilers: Found library: -lstdc++ compilers: Checking arg -ldl compilers: Found library: -ldl compilers: Checking arg -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 compilers: Found include directory: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 compilers: Checking arg -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include compilers: Found include directory: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib compilers: Checking arg -Xlinker compilers: Unknown arg -Xlinker compilers: Checking arg --enable-new-dtags compilers: Unknown arg --enable-new-dtags compilers: Checking arg -Xlinker compilers: Unknown arg -Xlinker compilers: Checking arg -rpath compilers: Checking arg /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt compilers: Unknown arg /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt compilers: Checking arg -Xlinker compilers: Unknown arg -Xlinker compilers: Checking arg -rpath compilers: Checking arg /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib compilers: Unknown arg /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib compilers: Checking arg -lmpifort compilers: Found library: -lmpifort compilers: Checking arg -lmpi compilers: Found library: -lmpi compilers: Checking arg -lrt compilers: Found library: -lrt compilers: Checking arg -lpthread compilers: Found library: -lpthread compilers: Checking arg -Wl,-z,now compilers: Unknown arg -Wl,-z,now compilers: Checking arg -Wl,-z,relro compilers: Unknown arg -Wl,-z,relro compilers: Checking arg -Wl,-z,noexecstack compilers: Unknown arg -Wl,-z,noexecstack compilers: Checking arg -Xlinker compilers: Unknown arg -Xlinker compilers: Checking arg --enable-new-dtags compilers: Unknown arg --enable-new-dtags compilers: Checking arg -ldl compilers: Already in lflags: -ldl compilers: Checking arg -l compilers: Found canonical library: -lgfortran compilers: Checking arg -l compilers: Found canonical library: -lm compilers: Checking arg -shared-libgcc compilers: Unknown arg 
-shared-libgcc compilers: Checking arg Using compilers: Unknown arg Using compilers: Checking arg built-in compilers: Unknown arg built-in compilers: Checking arg specs. compilers: Unknown arg specs. compilers: Checking arg COLLECT_GCC=gfortran compilers: Unknown arg COLLECT_GCC=gfortran compilers: Checking arg COLLECT_LTO_WRAPPER=/usr/lib64/gcc/x86_64-suse-linux/7/lto-wrapper compilers: Unknown arg COLLECT_LTO_WRAPPER=/usr/lib64/gcc/x86_64-suse-linux/7/lto-wrapper compilers: Checking arg OFFLOAD_TARGET_NAMES=hsa:nvptx-none compilers: Unknown arg OFFLOAD_TARGET_NAMES=hsa:nvptx-none compilers: Checking arg Target: compilers: Unknown arg Target: compilers: Checking arg x86_64-suse-linux compilers: Unknown arg x86_64-suse-linux compilers: Checking arg Configured compilers: Unknown arg Configured compilers: Checking arg with: compilers: Unknown arg with: compilers: Checking arg ../configure compilers: Unknown arg ../configure compilers: Checking arg --prefix=/usr compilers: Unknown arg --prefix=/usr compilers: Checking arg --infodir=/usr/share/info compilers: Unknown arg --infodir=/usr/share/info compilers: Checking arg --mandir=/usr/share/man compilers: Unknown arg --mandir=/usr/share/man compilers: Checking arg --libdir=/usr/lib64 compilers: Unknown arg --libdir=/usr/lib64 compilers: Checking arg --libexecdir=/usr/lib64 compilers: Unknown arg --libexecdir=/usr/lib64 compilers: Checking arg --enable-languages=c,c++,objc,fortran,obj-c++,ada,go compilers: Unknown arg --enable-languages=c,c++,objc,fortran,obj-c++,ada,go compilers: Checking arg --enable-offload-targets=hsa,nvptx-none=/usr/nvptx-none, compilers: Unknown arg --enable-offload-targets=hsa,nvptx-none=/usr/nvptx-none, compilers: Checking arg --without-cuda-driver compilers: Unknown arg --without-cuda-driver compilers: Checking arg --enable-checking=release compilers: Unknown arg --enable-checking=release compilers: Checking arg --disable-werror compilers: Unknown arg --disable-werror compilers: Checking arg --with-gxx-include-dir=/usr/include/c++/7 compilers: Unknown arg --with-gxx-include-dir=/usr/include/c++/7 compilers: Checking arg --enable-ssp compilers: Unknown arg --enable-ssp compilers: Checking arg --disable-libssp compilers: Unknown arg --disable-libssp compilers: Checking arg --disable-libvtv compilers: Unknown arg --disable-libvtv compilers: Checking arg --disable-libcc1 compilers: Unknown arg --disable-libcc1 compilers: Checking arg --disable-plugin compilers: Unknown arg --disable-plugin compilers: Checking arg --with-bugurl=https://bugs.opensuse.org/ compilers: Unknown arg --with-bugurl=https://bugs.opensuse.org/ compilers: Checking arg --with-pkgversion= compilers: Unknown arg --with-pkgversion= compilers: Checking arg --with-slibdir=/lib64 compilers: Unknown arg --with-slibdir=/lib64 compilers: Checking arg --with-system-zlib compilers: Unknown arg --with-system-zlib compilers: Checking arg --enable-libstdcxx-allocator=new compilers: Unknown arg --enable-libstdcxx-allocator=new compilers: Checking arg --disable-libstdcxx-pch compilers: Unknown arg --disable-libstdcxx-pch compilers: Checking arg --enable-version-specific-runtime-libs compilers: Unknown arg --enable-version-specific-runtime-libs compilers: Checking arg --with-gcc-major-version-only compilers: Unknown arg --with-gcc-major-version-only compilers: Checking arg --enable-linker-build-id compilers: Unknown arg --enable-linker-build-id compilers: Checking arg --enable-linux-futex compilers: Unknown arg --enable-linux-futex compilers: Checking arg 
--enable-gnu-indirect-function compilers: Unknown arg --enable-gnu-indirect-function compilers: Checking arg --program-suffix=-7 compilers: Unknown arg --program-suffix=-7 compilers: Checking arg --without-system-libunwind compilers: Unknown arg --without-system-libunwind compilers: Checking arg --enable-multilib compilers: Unknown arg --enable-multilib compilers: Checking arg --with-arch-32=x86-64 compilers: Unknown arg --with-arch-32=x86-64 compilers: Checking arg --with-tune=generic compilers: Unknown arg --with-tune=generic compilers: Checking arg --build=x86_64-suse-linux compilers: Unknown arg --build=x86_64-suse-linux compilers: Checking arg --host=x86_64-suse-linux compilers: Unknown arg --host=x86_64-suse-linux compilers: Checking arg Thread compilers: Unknown arg Thread compilers: Checking arg model: compilers: Unknown arg model: compilers: Checking arg posix compilers: Unknown arg posix compilers: Checking arg gcc compilers: Unknown arg gcc compilers: Checking arg version compilers: Unknown arg version compilers: Checking arg 7.5.0 compilers: Unknown arg 7.5.0 compilers: Checking arg (SUSE compilers: Unknown arg (SUSE compilers: Checking arg Linux) compilers: Unknown arg Linux) compilers: Checking arg Reading compilers: Unknown arg Reading compilers: Checking arg specs compilers: Unknown arg specs compilers: Checking arg from compilers: Unknown arg from compilers: Checking arg /usr/lib64/gcc/x86_64-suse-linux/7/libgfortran.spec compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/libgfortran.spec compilers: Checking arg rename compilers: Unknown arg rename compilers: Checking arg spec compilers: Unknown arg spec compilers: Checking arg lib compilers: Unknown arg lib compilers: Checking arg to compilers: Unknown arg to compilers: Checking arg liborig compilers: Unknown arg liborig compilers: Checking arg COLLECT_GCC_OPTIONS= compilers: Unknown arg COLLECT_GCC_OPTIONS= compilers: Checking arg COMPILER_PATH=/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/:/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ compilers: Skipping arg COMPILER_PATH=/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/:/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ compilers: Checking arg LIBRARY_PATH=/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7/:/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib/:/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64/:/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/lib/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../:/lib/:/usr/lib/ compilers: Skipping arg 
LIBRARY_PATH=/usr/lib64/gcc/x86_64-suse-linux/7/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7/:/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib/:/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64/:/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin/:/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/lib/:/usr/lib64/gcc/x86_64-suse-linux/7/../../../:/lib/:/usr/lib/ compilers: Checking arg COLLECT_GCC_OPTIONS= compilers: Unknown arg COLLECT_GCC_OPTIONS= compilers: Checking arg /usr/lib64/gcc/x86_64-suse-linux/7/collect2 compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/collect2 compilers: Checking arg -plugin compilers: Unknown arg -plugin compilers: Checking arg /usr/lib64/gcc/x86_64-suse-linux/7/liblto_plugin.so compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/liblto_plugin.so compilers: Checking arg -plugin-opt=/usr/lib64/gcc/x86_64-suse-linux/7/lto-wrapper compilers: Unknown arg -plugin-opt=/usr/lib64/gcc/x86_64-suse-linux/7/lto-wrapper compilers: Checking arg -plugin-opt=-fresolution=/tmp/ccBpEEeu.res compilers: Unknown arg -plugin-opt=-fresolution=/tmp/ccBpEEeu.res compilers: Checking arg -plugin-opt=-pass-through=-lgcc_s compilers: Unknown arg -plugin-opt=-pass-through=-lgcc_s compilers: Checking arg -plugin-opt=-pass-through=-lgcc compilers: Unknown arg -plugin-opt=-pass-through=-lgcc compilers: Checking arg -plugin-opt=-pass-through=-lquadmath compilers: Unknown arg -plugin-opt=-pass-through=-lquadmath compilers: Checking arg -plugin-opt=-pass-through=-lm compilers: Unknown arg -plugin-opt=-pass-through=-lm compilers: Checking arg -plugin-opt=-pass-through=-lgcc_s compilers: Unknown arg -plugin-opt=-pass-through=-lgcc_s compilers: Checking arg -plugin-opt=-pass-through=-lgcc compilers: Unknown arg -plugin-opt=-pass-through=-lgcc compilers: Checking arg -plugin-opt=-pass-through=-lc compilers: Unknown arg -plugin-opt=-pass-through=-lc compilers: Checking arg -plugin-opt=-pass-through=-lgcc_s compilers: Unknown arg -plugin-opt=-pass-through=-lgcc_s compilers: Checking arg -plugin-opt=-pass-through=-lgcc compilers: Unknown arg -plugin-opt=-pass-through=-lgcc compilers: Checking arg --build-id compilers: Unknown arg --build-id compilers: Checking arg --eh-frame-hdr compilers: Unknown arg --eh-frame-hdr compilers: Checking arg -m compilers: Unknown arg -m compilers: Checking arg elf_x86_64 compilers: Unknown arg elf_x86_64 compilers: Checking arg -dynamic-linker compilers: Unknown arg -dynamic-linker compilers: Checking arg /lib64/ld-linux-x86-64.so.2 compilers: Unknown arg /lib64/ld-linux-x86-64.so.2 compilers: Checking arg -o compilers: Unknown arg -o compilers: Checking arg /tmp/petsc-wjcu960y/config.compilers/conftest compilers: Unknown arg /tmp/petsc-wjcu960y/config.compilers/conftest compilers: Checking arg /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crt1.o compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crt1.o compilers: Checking arg /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crti.o compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crti.o compilers: Checking arg 
/usr/lib64/gcc/x86_64-suse-linux/7/crtbegin.o compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/crtbegin.o compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt compilers: Already in lflags so skipping: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib compilers: Already in lflags so skipping: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib compilers: Checking arg -L/usr/lib64/gcc/x86_64-suse-linux/7 compilers: Found library directory: -L/usr/lib64/gcc/x86_64-suse-linux/7 compilers: Checking arg -L/usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64 compilers: Checking arg -L/lib/../lib64 compilers: Checking arg -L/usr/lib/../lib64 compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin compilers: Checking arg -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 compilers: Found library directory: -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 compilers: Checking arg -L/usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/lib compilers: Found library directory: -L/usr/x86_64-suse-linux/lib compilers: Checking arg -L/usr/lib64/gcc/x86_64-suse-linux/7/../../.. 
compilers: Checking arg /tmp/petsc-wjcu960y/config.compilers/conftest.o compilers: Unknown arg /tmp/petsc-wjcu960y/config.compilers/conftest.o compilers: Checking arg -lstdc++ compilers: Already in lflags: -lstdc++ compilers: Checking arg -ldl compilers: Already in lflags: -ldl compilers: Checking arg --enable-new-dtags compilers: Unknown arg --enable-new-dtags compilers: Checking arg -rpath compilers: Found -rpath library: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt compilers: Checking arg -rpath compilers: Found -rpath library: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib compilers: Checking arg -lmpifort compilers: Already in lflags: -lmpifort compilers: Checking arg -lmpi compilers: Already in lflags: -lmpi compilers: Checking arg -lrt compilers: Already in lflags: -lrt compilers: Checking arg -lpthread compilers: Already in lflags: -lpthread compilers: Checking arg -z compilers: Unknown arg -z compilers: Checking arg now compilers: Unknown arg now compilers: Checking arg -z compilers: Unknown arg -z compilers: Checking arg relro compilers: Unknown arg relro compilers: Checking arg -z compilers: Unknown arg -z compilers: Checking arg noexecstack compilers: Unknown arg noexecstack compilers: Checking arg --enable-new-dtags compilers: Unknown arg --enable-new-dtags compilers: Checking arg -ldl compilers: Already in lflags: -ldl compilers: Checking arg -lgfortran compilers: Found library: -lgfortran compilers: Checking arg -lm compilers: Found library: -lm compilers: Checking arg -lgcc_s compilers: Found library: -lgcc_s compilers: Checking arg -lgcc compilers: Found system library therefor skipping: -lgcc compilers: Checking arg -lquadmath compilers: Found library: -lquadmath compilers: Checking arg -lm compilers: Already in lflags: -lm compilers: Checking arg -lgcc_s compilers: Already in lflags: -lgcc_s compilers: Checking arg -lgcc compilers: Found system library therefor skipping: -lgcc compilers: Checking arg -lc compilers: Found system library therefor skipping: -lc compilers: Checking arg -lgcc_s compilers: Already in lflags: -lgcc_s compilers: Checking arg -lgcc compilers: Found system library therefor skipping: -lgcc compilers: Checking arg /usr/lib64/gcc/x86_64-suse-linux/7/crtend.o compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/crtend.o compilers: Checking arg /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crtn.o compilers: Unknown arg /usr/lib64/gcc/x86_64-suse-linux/7/../../../../lib64/crtn.o compilers: Checking arg COLLECT_GCC_OPTIONS= compilers: Unknown arg COLLECT_GCC_OPTIONS= compilers: Libraries needed to link Fortran code with the C linker: ['-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', 
'-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] compilers: Libraries needed to link Fortran main with the C linker: [] compilers: Check that Fortran libraries can be used from C Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib 
-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld 
returned 1 exit status Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.setCompilers/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest compilers: Check that Fortran libraries can be used from C++ compilers: Fortran libraries can be used from C++ Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas 
-fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin 
-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.setCompilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 
-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.setCompilers/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest Executing: /tmp/petsc-wjcu960y/config.setCompilers/conftest ================================================================================ TEST checkFortranLinkingCxx from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1273) TESTING: checkFortranLinkingCxx from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1273) Check that Fortran can be linked against C++ Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" extern "C" void d1chk_(void); void foo(void){d1chk_();} Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" extern "C" void d1chk_(void); void d1chk_(void){return;} Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main call d1chk() end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/cxxobj.o /tmp/petsc-wjcu960y/config.compilers/confc.o -lstdc++ -ldl compilers: Fortran can link C++ functions ================================================================================ TEST checkFortran90 from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1339) TESTING: checkFortran90 from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1339) Determine whether the Fortran compiler handles F90 Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main INTEGER, PARAMETER :: int = SELECTED_INT_KIND(8) INTEGER (KIND=int) :: ierr ierr = 1 end Executing: 
mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++ -ldl Defined "USING_F90" to "1" Fortran compiler supports F90 ================================================================================ TEST checkFortran90FreeForm from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1352) TESTING: checkFortran90FreeForm from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1352) Determine whether the Fortran compiler handles F90FreeForm We also require that the compiler handles lines longer than 132 characters Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main INTEGER, PARAMETER :: int = SELECTED_INT_KIND(8); INTEGER (KIND=int) :: ierr; ierr = 1 end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++ -ldl Defined "USING_F90FREEFORM" to "1" Fortran compiler supports F90FreeForm ================================================================================ TEST checkFortran2003 from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1366) TESTING: checkFortran2003 from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1366) Determine whether the Fortran compiler handles F2003 Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: module Base_module type, public :: base_type integer :: A contains procedure, public :: Print => BasePrint end type base_type contains subroutine BasePrint(this) class(base_type) :: this end subroutine BasePrint end module Base_module program main use,intrinsic :: iso_c_binding Type(C_Ptr),Dimension(:),Pointer :: CArray character(kind=c_char),pointer :: nullc => null() character(kind=c_char,len=5),dimension(:),pointer::list1 allocate(list1(5)) CArray = (/(c_loc(list1(i)),i=1,5),c_loc(nullc)/) end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++ -ldl Defined "USING_F2003" to "1" Fortran compiler supports F2003 ================================================================================ TEST checkFortran90Array from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1401) TESTING: checkFortran90Array from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1401) Check for F90 array interfaces Executing: uname -s stdout: Linux Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O 
/tmp/petsc-wjcu960y/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include void f90arraytest_(void* a1, void* a2,void* a3, void* i) { printf("arrays [%p %p %p]\n",a1,a2,a3); fflush(stdout); return; } void f90ptrtest_(void* a1, void* a2,void* a3, void* i, void* p1 ,void* p2, void* p3) { printf("arrays [%p %p %p]\n",a1,a2,a3); if ((p1 == p3) && (p1 != p2)) { printf("pointers match! [%p %p] [%p]\n",p1,p3,p2); fflush(stdout); } else { printf("pointers do not match! [%p %p] [%p]\n",p1,p3,p2); fflush(stdout); exit(111); } return; } Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main Interface Subroutine f90ptrtest(p1,p2,p3,i) integer, pointer :: p1(:,:) integer, pointer :: p2(:,:) integer, pointer :: p3(:,:) integer i End Subroutine End Interface integer, pointer :: ptr1(:,:),ptr2(:,:) integer, target :: array(6:8,9:21) integer in in = 25 ptr1 => array ptr2 => array call f90arraytest(ptr1,ptr2,ptr1,in) call f90ptrtest(ptr1,ptr2,ptr1,in) end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/fooobj.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.compilers/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.compilers/conftest Executing: /tmp/petsc-wjcu960y/config.compilers/conftest stdout: arrays [0x7ffd664e06f0 0x7ffd664e06f0 0x7ffd664e06f0] arrays [0x7ffd664e06a0 0x7ffd664e0650 0x7ffd664e06a0] pointers do not match! [0x14ee72e20400 0x7ffd664e06f0] [0x14ee759542c0] ERROR while running executable: Could not execute "['/tmp/petsc-wjcu960y/config.compilers/conftest']": arrays [0x7ffd664e06f0 0x7ffd664e06f0 0x7ffd664e06f0] arrays [0x7ffd664e06a0 0x7ffd664e0650 0x7ffd664e06a0] pointers do not match! 
[0x14ee72e20400 0x7ffd664e06f0] [0x14ee759542c0] compilers: F90 uses a single argument for array pointers ================================================================================ TEST checkFortranModuleInclude from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1488) TESTING: checkFortranModuleInclude from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1488) Figures out what flag is used to specify the include path for Fortran modules Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: module configtest integer testint parameter (testint = 42) end module configtest Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main use configtest write(*,*) testint end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -I/tmp/petsc-wjcu960y/config.compilers/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o /tmp/petsc-wjcu960y/config.compilers/configtest.o -lstdc++ -ldl compilers: Fortran module include flag -I found ================================================================================ TEST checkFortranModuleOutput from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1554) TESTING: checkFortranModuleOutput from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1554) Figures out what flag is used to specify the include path for Fortran modules Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -module /tmp/petsc-wjcu960y/config.compilers/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Possible ERROR while running compiler: exit code 1 stderr: gfortran: error: unrecognized command line option '-module'; did you mean '-mhle'? Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -module compile failed Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -module:/tmp/petsc-wjcu960y/config.compilers/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Possible ERROR while running compiler: exit code 1 stderr: gfortran: error: unrecognized command line option '-module:/tmp/petsc-wjcu960y/config.compilers/confdir'
Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -module: compile failed Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fmod=/tmp/petsc-wjcu960y/config.compilers/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Possible ERROR while running compiler: exit code 1 stderr: gfortran: error: unrecognized command line option '-fmod=/tmp/petsc-wjcu960y/config.compilers/confdir' Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -fmod= compile failed Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -J/tmp/petsc-wjcu960y/config.compilers/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -J found ================================================================================ TEST checkFortranTypeStar from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1328) TESTING: checkFortranTypeStar from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1328) Determine whether the Fortran compiler handles type(*) Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main interface subroutine a(b) type(*) :: b(:) end subroutine end interface end Defined "HAVE_FORTRAN_TYPE_STAR" to "1" Fortran compiler supports type(*) ================================================================================ TEST checkFortranTypeInitialize from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1317) TESTING: checkFortranTypeInitialize from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1317) Determines if PETSc objects in Fortran are initialized by default (doesn't work with common blocks) Defined "HAVE_FORTRAN_TYPE_INITIALIZE" to "-2" Defined "FORTRAN_TYPE_INITIALIZE" to " = -2" Initializing Fortran objects ================================================================================ TEST configureFortranFlush from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1308) TESTING: configureFortranFlush from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1308) Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.compilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.F90 Successful compile: Source: program main call flush(6) end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.compilers/conftest.o -lstdc++
-ldl Defined "HAVE_FORTRAN_FLUSH" to "1" ================================================================================ TEST setupFrameworkCompilers from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1721) TESTING: setupFrameworkCompilers from config.compilers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/compilers.py:1721) ================================================================================ TEST configureClosure from config.utilities.closure(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/closure.py:17) TESTING: configureClosure from config.utilities.closure(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/closure.py:17) Determine if Apple ^close syntax is supported in C All intermediate test results are stored in /tmp/petsc-wjcu960y/config.utilities.closure Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.closure/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.closure/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.utilities.closure/conftest.c: In function 'main': /tmp/petsc-wjcu960y/config.utilities.closure/conftest.c:6:6: error: expected identifier or '(' before '^' token int (^closure)(int);; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { int (^closure)(int);; return 0; } Compile failed inside link ================================================================================ TEST configureFortranCPP from PETSc.options.fortranCPP(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/fortranCPP.py:20) TESTING: configureFortranCPP from PETSc.options.fortranCPP(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/fortranCPP.py:20) Handle case where Fortran cannot preprocess properly Defined make rule ".f.o .f90.o .f95.o" with dependencies "" and code ['${PETSC_MAKE_STOP_ON_ERROR}${FC} -c ${FC_FLAGS} ${FFLAGS} -o $@ $<'] Defined make rule ".f.a" with dependencies "" and code ['${PETSC_MAKE_STOP_ON_ERROR}${FC} -c ${FC_FLAGS} ${FFLAGS} $<', '-${AR} ${AR_FLAGS} ${LIBNAME} $*.o', '-${RM} $*.o'] Defined make rule ".F.o .F90.o .F95.o" with dependencies "" and code ['${PETSC_MAKE_STOP_ON_ERROR}${FC} -c ${FC_FLAGS} ${FFLAGS} ${FCPPFLAGS} -o $@ $<'] Defined make rule ".F.a" with dependencies "" and code ['${PETSC_MAKE_STOP_ON_ERROR}${FC} -c ${FC_FLAGS} ${FFLAGS} ${FCPPFLAGS} $<', '-${AR} ${AR_FLAGS} ${LIBNAME} $*.o', '-${RM} $*.o'] ================================================================================ TEST configureCLanguage from PETSc.options.languages(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/languages.py:27) TESTING: configureCLanguage from PETSc.options.languages(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/languages.py:27) Choose whether to compile the PETSc library using a C or C++ compiler C language is C Defined "CLANGUAGE_C" to "1" ================================================================================ TEST checkStdC from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:105) TESTING: checkStdC from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:105) Executing:
mpicc -c -o /tmp/petsc-wjcu960y/config.headers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #include #include int main() { ; return 0; } Source: #include "confdefs.h" #include "conffix.h" #include Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Source: #include "confdefs.h" #include "conffix.h" #include Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.headers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #define ISLOWER(c) ('a' <= (c) && (c) <= 'z') #define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main() { int i; for(i = 0; i < 256; i++) if (XOR(islower(i), ISLOWER(i)) || toupper(i) != TOUPPER(i)) exit(2); exit(0); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.headers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.headers/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.headers/conftest Executing: /tmp/petsc-wjcu960y/config.headers/conftest Defined "STDC_HEADERS" to "1" ================================================================================ TEST checkStat from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:138) TESTING: checkStat from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:138) Checks whether stat file-mode macros are broken, and defines STAT_MACROS_BROKEN if they are Source: #include "confdefs.h" #include "conffix.h" #include #include #if defined(S_ISBLK) && defined(S_IFDIR) # if S_ISBLK (S_IFDIR) You lose. # endif #endif #if defined(S_ISBLK) && defined(S_IFCHR) # if S_ISBLK (S_IFCHR) You lose. # endif #endif #if defined(S_ISLNK) && defined(S_IFREG) # if S_ISLNK (S_IFREG) You lose. # endif #endif #if defined(S_ISSOCK) && defined(S_IFREG) # if S_ISSOCK (S_IFREG) You lose. 
# endif #endif Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c ================================================================================ TEST checkSysWait from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:173) TESTING: checkSysWait from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:173) Check for POSIX.1 compatible sys/wait.h, and defines HAVE_SYS_WAIT_H if found Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.headers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #ifndef WEXITSTATUS #define WEXITSTATUS(stat_val) ((unsigned)(stat_val) >> 8) #endif #ifndef WIFEXITED #define WIFEXITED(stat_val) (((stat_val) & 255) == 0) #endif int main() { int s; wait (&s); s = WIFEXITED (s) ? WEXITSTATUS (s) : 1; ; return 0; } Defined "HAVE_SYS_WAIT_H" to "1" ================================================================================ TEST checkTime from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:195) TESTING: checkTime from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:195) Checks if you can safely include both and , and if so defines TIME_WITH_SYS_TIME Checking for header: time.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_TIME_H" to "1" Checking for header: sys/time.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TIME_H" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.headers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #include int main() { struct tm *tp = 0; if (tp); ; return 0; } Defined "TIME_WITH_SYS_TIME" to "1" ================================================================================ TEST checkMath from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:203) TESTING: checkMath from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:203) Checks for the math headers and defines Checking for header: math.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_MATH_H" to 
"1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.headers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { double pi = M_PI; if (pi); ; return 0; } Found math #defines, like M_PI Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.headers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { double f = INFINITY; if (f); ; return 0; } Defined "HAVE_MATH_INFINITY" to "1" Found math INFINITY ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: setjmp.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SETJMP_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: dos.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: endian.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_ENDIAN_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: fcntl.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FCNTL_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: float.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FLOAT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: io.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: limits.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_LIMITS_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: malloc.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_MALLOC_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: pwd.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_PWD_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: search.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SEARCH_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: strings.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STRINGS_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check 
from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: unistd.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_UNISTD_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/sysinfo.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_SYSINFO_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: machine/endian.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/param.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_PARAM_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/procfs.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_PROCFS_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/resource.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_RESOURCE_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/systeminfo.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/times.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TIMES_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/utsname.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_UTSNAME_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: string.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STRING_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: stdlib.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STDLIB_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/socket.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_SOCKET_H" to "1" ================================================================================ TEST check from 
config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/wait.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_WAIT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: netinet/in.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_NETINET_IN_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: netdb.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_NETDB_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: Direct.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
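A note on how results such as HAVE_NETINET_IN_H, or the recorded absence of Direct.h, are used later: the defines collected here end up in the generated petscconf.h and guard optional includes and code paths. The minimal sketch below shows that usage pattern; the PETSC_HAVE_ prefix is an assumption about how these raw HAVE_ names are emitted into petscconf.h, since the prefixing step is not visible in this part of the log.

    /* Hypothetical consumer of the configure results above. */
    #include <petscconf.h>

    #if defined(PETSC_HAVE_NETINET_IN_H)
    #include <netinet/in.h>      /* only included where configure found it */
    #endif

    #if !defined(PETSC_HAVE_SYS_WAIT_H)
    /* fall back to a code path that does not need wait()/waitpid() */
    #endif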
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: time.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_TIME_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: Ws2tcpip.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/types.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TYPES_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: WindowsX.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: cxxabi.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: cxxabi.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: cxxabi.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: cxxabi.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: float.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FLOAT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: ieeefp.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: stdint.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STDINT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sched.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SCHED_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: pthread.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_PTHREAD_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: inttypes.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_INTTYPES_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: immintrin.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_IMMINTRIN_H" to "1" ================================================================================ TEST check from 
config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: zmmintrin.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: setjmp.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SETJMP_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: dos.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: endian.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_ENDIAN_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: fcntl.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FCNTL_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: float.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FLOAT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: io.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: limits.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_LIMITS_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: malloc.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_MALLOC_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: pwd.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_PWD_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: search.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SEARCH_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: strings.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STRINGS_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check 
from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: unistd.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_UNISTD_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/sysinfo.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_SYSINFO_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: machine/endian.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/param.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_PARAM_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/procfs.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_PROCFS_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/resource.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_RESOURCE_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/systeminfo.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/times.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TIMES_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/utsname.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_UTSNAME_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: string.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STRING_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: stdlib.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STDLIB_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/socket.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_SOCKET_H" to "1" ================================================================================ TEST check from 
config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/wait.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_WAIT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: netinet/in.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_NETINET_IN_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: netdb.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_NETDB_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: Direct.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: time.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_TIME_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: Ws2tcpip.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/types.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TYPES_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: WindowsX.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: cxxabi.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: cxxabi.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: cxxabi.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: cxxabi.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: float.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FLOAT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: ieeefp.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: stdint.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STDINT_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sched.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SCHED_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: pthread.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_PTHREAD_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: inttypes.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_INTTYPES_H" to "1" ================================================================================ TEST check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: immintrin.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_IMMINTRIN_H" to "1" ================================================================================ TEST check from 
config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: zmmintrin.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST checkRecursiveMacros from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:223) TESTING: checkRecursiveMacros from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:223) Checks that the preprocessor allows recursive macros, and if not defines HAVE_BROKEN_RECURSIVE_MACRO Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.headers/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" void a(int i, int j) {} #define a(b) a(b,__LINE__) int main() { a(0); ; return 0; } ================================================================================ TEST configureCacheDetails from config.utilities.cacheDetails(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/cacheDetails.py:77) TESTING: configureCacheDetails from config.utilities.cacheDetails(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/cacheDetails.py:77) Try to determine the size and associativity of the cache. 
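In essence, each of the cache probes that configure compiles and runs below reduces to a tiny program of the following shape (a condensed sketch based on the conftest.c sources shown further down in the log; it assumes a glibc-style sysconf() that understands _SC_LEVEL1_DCACHE_SIZE and falls back to 32768 bytes otherwise):

    #include <stdio.h>
    #include <unistd.h>

    /* Ask the C library for the L1 data cache size; if the value is
       missing or implausible, fall back to 32768 bytes, exactly as the
       configure probe below does. */
    static long level1_dcache_size(void)
    {
      long val = sysconf(_SC_LEVEL1_DCACHE_SIZE);
      return (16 <= val && val <= 2147483647) ? val : 32768;
    }

    int main(void)
    {
      FILE *output = fopen("conftestval", "w");
      if (!output) return 1;
      fprintf(output, "%ld", level1_dcache_size());
      fclose(output);
      return 0;
    }

configure then reads the number back from conftestval and records it (here as LEVEL1_DCACHE_SIZE); the linesize and associativity probes that follow differ only in the sysconf parameter and the fallback value.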
All intermediate test results are stored in /tmp/petsc-wjcu960y/config.utilities.cacheDetails Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include long getconf_LEVEL1_DCACHE_SIZE() { long val = sysconf(_SC_LEVEL1_DCACHE_SIZE); return (16 <= val && val <= 2147483647) ? val : 32768; } int main() { ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include long getconf_LEVEL1_DCACHE_SIZE() { long val = sysconf(_SC_LEVEL1_DCACHE_SIZE); return (16 <= val && val <= 2147483647) ? val : 32768; } int main() { FILE *output = fopen("conftestval","w"); if (!output) return 1; fprintf(output,"%ld",getconf_LEVEL1_DCACHE_SIZE()); fclose(output);; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest Executing: /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest Defined "LEVEL1_DCACHE_SIZE" to "32768" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include long getconf_LEVEL1_DCACHE_LINESIZE() { long val = sysconf(_SC_LEVEL1_DCACHE_LINESIZE); return (16 <= val && val <= 2147483647) ? 
val : 32; } int main() { FILE *output = fopen("conftestval","w"); if (!output) return 1; fprintf(output,"%ld",getconf_LEVEL1_DCACHE_LINESIZE()); fclose(output);; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest Executing: /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest Defined "LEVEL1_DCACHE_LINESIZE" to "64" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include long getconf_LEVEL1_DCACHE_ASSOC() { long val = sysconf(_SC_LEVEL1_DCACHE_ASSOC); return (0 <= val && val <= 2147483647) ? val : 2; } int main() { FILE *output = fopen("conftestval","w"); if (!output) return 1; fprintf(output,"%ld",getconf_LEVEL1_DCACHE_ASSOC()); fclose(output);; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest Executing: /tmp/petsc-wjcu960y/config.utilities.cacheDetails/conftest Defined "LEVEL1_DCACHE_ASSOC" to "8" ================================================================================ TEST check_struct_sigaction from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:46) TESTING: check_struct_sigaction from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:46) Checks if "struct sigaction" exists in signal.h. This check is for C89 check. Checking for type: struct sigaction All intermediate test results are stored in /tmp/petsc-wjcu960y/config.types Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:13:18: warning: unused variable ???a??? 
[-Wunused-variable] struct sigaction a;; ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#if STDC_HEADERS
#include
#include
#include
#endif
int main() {
struct sigaction a;;
  return 0;
}
struct sigaction found
Defined "HAVE_STRUCT_SIGACTION" to "1"
================================================================================
TEST check__int64 from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:52)
TESTING: check__int64 from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:52)
  Checks if __int64 exists. This is primarily for windows.
Checking for type: __int64
Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/tmp/petsc-wjcu960y/config.types/conftest.c: In function ‘main’:
/tmp/petsc-wjcu960y/config.types/conftest.c:13:1: error: unknown type name ‘__int64’; did you mean ‘__int64_t’?
 __int64 a;;
 ^~~~~~~
 __int64_t
/tmp/petsc-wjcu960y/config.types/conftest.c:13:9: warning: unused variable ‘a’ [-Wunused-variable]
 __int64 a;;
         ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#if STDC_HEADERS
#include
#include
#endif
int main() {
__int64 a;;
  return 0;
}
__int64 found
================================================================================
TEST checkSizeTypes from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:58)
TESTING: checkSizeTypes from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:58)
  Checks for types associated with sizes, such as size_t.
Checking for type: size_t
Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/tmp/petsc-wjcu960y/config.types/conftest.c: In function ‘main’:
/tmp/petsc-wjcu960y/config.types/conftest.c:13:8: warning: unused variable ‘a’ [-Wunused-variable]
 size_t a;;
        ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#if STDC_HEADERS
#include
#include
#endif
int main() {
size_t a;;
  return 0;
}
size_t found
================================================================================
TEST checkFileTypes from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:68)
TESTING: checkFileTypes from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:68)
  Checks for types associated with files, such as mode_t, off_t, etc.
Checking for type: mode_t Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:13:8: warning: unused variable ???a??? [-Wunused-variable] mode_t a;; ^ Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #endif int main() { mode_t a;; return 0; } mode_t found Checking for type: off_t Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:13:7: warning: unused variable ???a??? [-Wunused-variable] off_t a;; ^ Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #endif int main() { off_t a;; return 0; } off_t found ================================================================================ TEST checkIntegerTypes from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:63) TESTING: checkIntegerTypes from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:63) Checks for types associated with integers, such as int32_t. Checking for type: int32_t Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:13:9: warning: unused variable ???a??? 
[-Wunused-variable] int32_t a;; ^ Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #endif int main() { int32_t a;; return 0; } int32_t found ================================================================================ TEST checkPID from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:74) TESTING: checkPID from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:74) Checks for pid_t, and defines it if necessary Checking for type: pid_t Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:13:7: warning: unused variable ???a??? [-Wunused-variable] pid_t a;; ^ Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #endif int main() { pid_t a;; return 0; } pid_t found ================================================================================ TEST checkUID from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:78) TESTING: checkUID from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:78) Checks for uid_t and gid_t, and defines them if necessary Source: #include "confdefs.h" #include "conffix.h" #include Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.types/conftest.c ================================================================================ TEST checkSignal from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:85) TESTING: checkSignal from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:85) Checks the return type of signal() and defines RETSIGTYPE to that type name Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #ifdef signal #undef signal #endif #ifdef __cplusplus extern "C" void (*signal (int, void(*)(int)))(int); #else void (*signal())(); #endif int main() { ; return 0; } Defined "RETSIGTYPE" to "void" ================================================================================ TEST checkC99Complex from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:106) TESTING: checkC99Complex from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:106) Check 
for complex numbers in in C99 std Note that since PETSc source code uses _Complex we test specifically for that, not complex Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:6:17: warning: variable ???x??? set but not used [-Wunused-but-set-variable] double _Complex x; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { double _Complex x; x = I; ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:6:17: warning: variable ???x??? set but not used [-Wunused-but-set-variable] double _Complex x; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { double _Complex x; x = I; ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Defined "HAVE_C99_COMPLEX" to "1" ================================================================================ TEST checkCxxComplex from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:117) TESTING: checkCxxComplex from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:117) Check for complex numbers in namespace std Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.types/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { std::complex x; ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.types/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Defined "HAVE_CXX_COMPLEX" to "1" ================================================================================ TEST checkFortranKind from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:138) TESTING: checkFortranKind from 
config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:138) Checks whether selected_int_kind etc work USE_FORTRANKIND Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.types/conftest.F90 Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.F90:4:43: real(kind=selected_real_kind(10)) d 1 Warning: Unused variable ???d??? declared at (1) [-Wunused-variable] /tmp/petsc-wjcu960y/config.types/conftest.F90:3:45: integer(kind=selected_int_kind(10)) i 1 Warning: Unused variable ???i??? declared at (1) [-Wunused-variable] Source: program main integer(kind=selected_int_kind(10)) i real(kind=selected_real_kind(10)) d end Defined "USE_FORTRANKIND" to "1" ================================================================================ TEST checkConst from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:150) TESTING: checkConst from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:150) Checks for working const, and if not found defines it to empty string Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:25:5: warning: this ???if??? clause does not guard... [-Wmisleading-indentation] if (x[0]); ^~ /tmp/petsc-wjcu960y/config.types/conftest.c:26:5: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ???if??? { /* SCO 3.2v4 cc rejects this. */ ^ /tmp/petsc-wjcu960y/config.types/conftest.c:25:10: warning: ???x[0]??? is used uninitialized in this function [-Wuninitialized] if (x[0]); ~^~~ /tmp/petsc-wjcu960y/config.types/conftest.c:30:9: warning: ???t??? is used uninitialized in this function [-Wuninitialized] *t++ = 0; ~^~ /tmp/petsc-wjcu960y/config.types/conftest.c:46:25: warning: ???b??? is used uninitialized in this function [-Wuninitialized] struct s *b; b->j = 5; ~~~~~^~~ Source: #include "confdefs.h" #include "conffix.h" int main() { /* Ultrix mips cc rejects this. */ typedef int charset[2]; const charset x; /* SunOS 4.1.1 cc rejects this. */ char const *const *ccp; char **p; /* NEC SVR4.0.2 mips cc rejects this. */ struct point {int x, y;}; static struct point const zero = {0,0}; /* AIX XL C 1.02.0.0 rejects this. It does not let you subtract one const X* pointer from another in an arm of an if-expression whose if-part is not a constant expression */ const char *g = "string"; ccp = &g + (g ? g-g : 0); /* HPUX 7.0 cc rejects these. */ ++ccp; p = (char**) ccp; ccp = (char const *const *) p; /* This section avoids unused variable warnings */ if (zero.x); if (x[0]); { /* SCO 3.2v4 cc rejects this. */ char *t; char const *s = 0 ? 
(char *) 0 : (char const *) 0; *t++ = 0; if (*s); } { /* Someone thinks the Sun supposedly-ANSI compiler will reject this. */ int x[] = {25, 17}; const int *foo = &x[0]; ++foo; } { /* Sun SC1.0 ANSI compiler rejects this -- but not the above. */ typedef const int *iptr; iptr p = 0; ++p; } { /* AIX XL C 1.02.0.0 rejects this saying "k.c", line 2.27: 1506-025 (S) Operand must be a modifiable lvalue. */ struct s { int j; const int *ap[3]; }; struct s *b; b->j = 5; } { /* ULTRIX-32 V3.1 (Rev 9) vcc rejects this */ const int foo = 10; /* Get rid of unused variable warning */ if (foo); } ; return 0; } ================================================================================ TEST checkEndian from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:206) TESTING: checkEndian from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:206) If the machine is big endian, defines WORDS_BIGENDIAN Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #ifdef HAVE_SYS_PARAM_H #include #endif int main() { #if !BYTE_ORDER || !BIG_ENDIAN || !LITTLE_ENDIAN bogus endian macros #endif ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.types/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.types/conftest.c:11:3: error: unknown type name ???not???; did you mean ???ino_t???? not big endian ^~~ ino_t /tmp/petsc-wjcu960y/config.types/conftest.c:11:11: error: expected ???=???, ???,???, ???;???, ???asm??? or ???__attribute__??? before ???endian??? 
not big endian ^~~~~~ Source: #include "confdefs.h" #include "conffix.h" #include #ifdef HAVE_SYS_PARAM_H #include #endif int main() { #if BYTE_ORDER != BIG_ENDIAN not big endian #endif ; return 0; } ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: char Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(char)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_CHAR" to "1" ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: void * Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(void *)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_VOID_P" to "8" 
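Every sizeof check in this part of the log follows the same run-and-read-back pattern: configure compiles a one-line program that writes sizeof(type) into the file conftestval, runs it, and defines SIZEOF_<TYPE> from the result. A condensed sketch of that probe (here for short, mirroring the conftest.c sources printed in the log; illustrative rather than the exact generated file):

    #include <stdio.h>
    #include <stdlib.h>

    /* Write sizeof(short) to "conftestval"; configure reads the file back
       and defines SIZEOF_SHORT accordingly. */
    int main(void)
    {
      FILE *f = fopen("conftestval", "w");
      if (!f) exit(1);
      fprintf(f, "%lu\n", (unsigned long)sizeof(short));
      return 0;
    }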
================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: short Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(short)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_SHORT" to "2" ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: int Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(int)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_INT" to "4" ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from 
config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: long Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(long)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_LONG" to "8" ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: long long Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(long long)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_LONG_LONG" to "8" ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: float Executing: mpicc -c 
-o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(float)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_FLOAT" to "4" ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: double Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(double)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_DOUBLE" to "8" ================================================================================ TEST checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) TESTING: checkSizeof from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:259) Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size Checking for size of type: size_t Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure 
-I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(size_t)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_SIZE_T" to "8" ================================================================================ TEST checkBitsPerByte from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:310) TESTING: checkBitsPerByte from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:310) Determine the nubmer of bits per byte and define BITS_PER_BYTE Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #if STDC_HEADERS #include #include #endif int main() { FILE *f = fopen("conftestval", "w"); char val[2]; int i = 0; if (!f) exit(1); val[0]='\1'; val[1]='\0'; while(val[0]) {val[0] <<= 1; i++;} fprintf(f, "%d\n", i); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "BITS_PER_BYTE" to "8" ================================================================================ TEST checkVisibility from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:356) TESTING: checkVisibility from config.types(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/types.py:356) Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include 
"conffix.h" int main() { __attribute__((visibility ("default"))) int foo(void);; return 0; } Defined "USE_VISIBILITY_C" to "1" Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.types/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { __attribute__((visibility ("default"))) int foo(void);; return 0; } Defined "USE_VISIBILITY_CXX" to "1" ================================================================================ TEST configureMemAlign from PETSc.options.memAlign(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/memAlign.py:29) TESTING: configureMemAlign from PETSc.options.memAlign(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/memAlign.py:29) Choose alignment Defined "MEMALIGN" to "16" Memory alignment is 16 ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [socket] in library ['socket', 'nsl'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char socket(); static void _check_socket() { socket(); } int main() { _check_socket();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lsocket -lnsl -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lsocket /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lnsl collect2: error: ld returned 1 exit status ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [handle_sigfpes] in library ['fpe'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char handle_sigfpes(); static void _check_handle_sigfpes() { handle_sigfpes(); } int main() { _check_handle_sigfpes();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lfpe -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lfpe collect2: error: ld returned 1 exit status ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [socket] in library ['socket', 'nsl'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char socket(); static void _check_socket() { socket(); } int main() { _check_socket();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lsocket -lnsl -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lsocket /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lnsl collect2: error: ld returned 1 exit status ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [handle_sigfpes] in library ['fpe'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include 
"conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char handle_sigfpes(); static void _check_handle_sigfpes() { handle_sigfpes(); } int main() { _check_handle_sigfpes();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lfpe -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: cannot find -lfpe collect2: error: ld returned 1 exit status ================================================================================ TEST checkMath from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:261) TESTING: checkMath from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:261) Check for sin() in libm, the math library Checking for functions [sin floor log10 pow] in library [''] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ #include double sin(double); static void _check_sin() { double x,y; scanf("%lf",&x); y = sin(x); printf("%f",y); ; } #include double floor(double); static void _check_floor() { double x,y; scanf("%lf",&x); y = floor(x); printf("%f",y); ; } #include double log10(double); static void _check_log10() { double x,y; scanf("%lf",&x); y = log10(x); printf("%f",y); ; } #include double pow(double, double); static void _check_pow() { double x,y; scanf("%lf",&x); y = pow(x,x); printf("%f",y); ; } int main() { _check_sin(); _check_floor(); _check_log10(); _check_pow();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: undefined reference to symbol 'log10@@GLIBC_2.2.5' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /lib64/libm.so.6: error adding symbols: DSO missing from command line collect2: error: ld returned 1 exit status Checking for functions [sin floor log10 pow] in library ['m'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c 
Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ #include double sin(double); static void _check_sin() { double x,y; scanf("%lf",&x); y = sin(x); printf("%f",y); ; } #include double floor(double); static void _check_floor() { double x,y; scanf("%lf",&x); y = floor(x); printf("%f",y); ; } #include double log10(double); static void _check_log10() { double x,y; scanf("%lf",&x); y = log10(x); printf("%f",y); ; } #include double pow(double, double); static void _check_pow() { double x,y; scanf("%lf",&x); y = pow(x,x); printf("%f",y); ; } int main() { _check_sin(); _check_floor(); _check_log10(); _check_pow();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lm -lstdc++ -ldl Defined "HAVE_LIBM" to "1" CheckMath: using math library ['libm.a'] ================================================================================ TEST checkMathErf from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:280) TESTING: checkMathErf from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:280) Check for erf() in libm, the math library Checking for functions [erf] in library ['libm.a'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.libraries/conftest.c: In function ???_check_erf???: /tmp/petsc-wjcu960y/config.libraries/conftest.c:5:74: warning: variable ???y??? set but not used [-Wunused-but-set-variable] static void _check_erf() { double (*checkErf)(double) = erf;double x = 0,y; y = (*checkErf)(x); } ^ Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ #include static void _check_erf() { double (*checkErf)(double) = erf;double x = 0,y; y = (*checkErf)(x); } int main() { _check_erf();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lm -lstdc++ -ldl Defined "HAVE_LIBM" to "1" erf() found Defined "HAVE_ERF" to "1" ================================================================================ TEST checkMathTgamma from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:289) TESTING: checkMathTgamma from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:289) Check for tgamma() in libm, the math library Checking for functions [tgamma] in library ['libm.a'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.libraries/conftest.c: In function ???_check_tgamma???: /tmp/petsc-wjcu960y/config.libraries/conftest.c:5:83: warning: variable ???y??? set but not used [-Wunused-but-set-variable] static void _check_tgamma() { double (*checkTgamma)(double) = tgamma;double x = 0,y; y = (*checkTgamma)(x); } ^ Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ #include static void _check_tgamma() { double (*checkTgamma)(double) = tgamma;double x = 0,y; y = (*checkTgamma)(x); } int main() { _check_tgamma();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lm -lstdc++ -ldl Defined "HAVE_LIBM" to "1" tgamma() found Defined "HAVE_TGAMMA" to "1" ================================================================================ TEST checkMathFenv from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:298) TESTING: checkMathFenv from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:298) Checks if can be used with FE_DFL_ENV Checking for functions [fesetenv] in library ['libm.a'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ #include static void _check_fesetenv() { fesetenv(FE_DFL_ENV);; } int main() { _check_fesetenv();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lm -lstdc++ -ldl Defined "HAVE_LIBM" to "1" Defined "HAVE_FENV_H" to "1" ================================================================================ TEST checkMathLog2 from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:306) TESTING: checkMathLog2 from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:306) Check for log2() in libm, the math library Checking for functions [log2] in library ['libm.a'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.libraries/conftest.c: In function ???_check_log2???: /tmp/petsc-wjcu960y/config.libraries/conftest.c:5:81: warning: unused variable ???y??? [-Wunused-variable] static void _check_log2() { double (*checkLog2)(double) = log2; double x = 2.5, y = (*checkLog2)(x); } ^ Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ #include static void _check_log2() { double (*checkLog2)(double) = log2; double x = 2.5, y = (*checkLog2)(x); } int main() { _check_log2();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lm -lstdc++ -ldl Defined "HAVE_LIBM" to "1" log2() found Defined "HAVE_LOG2" to "1" ================================================================================ TEST checkRealtime from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:315) TESTING: checkRealtime from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:315) Check for presence of clock_gettime() in realtime library (POSIX Realtime extensions) Checking for functions [clock_gettime] in library [''] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ #include static void _check_clock_gettime() { struct timespec tp; clock_gettime(CLOCK_REALTIME,&tp);; } int main() { _check_clock_gettime();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl realtime functions are linked in by default ================================================================================ TEST checkDynamic from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:331) TESTING: checkDynamic from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:331) Check for the header and libraries necessary for dynamic library manipulation Checking for functions [dlopen] in library ['dl'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char dlopen(); static void _check_dlopen() { dlopen(); } int main() { _check_dlopen();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -ldl -lstdc++ -ldl Defined "HAVE_LIBDL" to "1" Checking for header: dlfcn.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_DLFCN_H" to "1" ================================================================================ TEST configureLibraryOptions from PETSc.options.libraryOptions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/libraryOptions.py:37) TESTING: configureLibraryOptions from PETSc.options.libraryOptions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/libraryOptions.py:37) Sets PETSC_USE_DEBUG, PETSC_USE_INFO, PETSC_USE_LOG, PETSC_USE_CTABLE, PETSC_USE_FORTRAN_KERNELS, and PETSC_USE_AVX512_KERNELS Defined "USE_LOG" to "1" Executing: mpicc -qversion Defined "USE_MALLOC_COALESCED" to "1" Defined "USE_INFO" to "1" Defined "USE_CTABLE" to "1" Defined "USE_BACKWARD_LOOP" to "1" **********Checking if running on BGL/IBM detected Checking for functions [bgl_perfctr_void] in library [''] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c 
Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char bgl_perfctr_void(); static void _check_bgl_perfctr_void() { bgl_perfctr_void(); } int main() { _check_bgl_perfctr_void();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_bgl_perfctr_void': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `bgl_perfctr_void' collect2: error: ld returned 1 exit status Checking for functions [ADIOI_BGL_Open] in library [''] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char ADIOI_BGL_Open(); static void _check_ADIOI_BGL_Open() { ADIOI_BGL_Open(); } int main() { _check_ADIOI_BGL_Open();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_ADIOI_BGL_Open': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `ADIOI_BGL_Open' collect2: error: ld returned 1 exit status *********BGL/IBM test failure Defined "Alignx(a,b)" to " " ================================================================================ TEST configureISColorValueType from PETSc.options.libraryOptions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/libraryOptions.py:95) TESTING: configureISColorValueType from PETSc.options.libraryOptions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/libraryOptions.py:95) Sets PETSC_IS_COLOR_VALUE_TYPE, MPIU_COLORING_VALUE, IS_COLORING_MAX required by ISColor Defined "MPIU_COLORING_VALUE" to "MPI_UNSIGNED_SHORT" Defined "IS_COLORING_MAX" to "65535" Defined "IS_COLOR_VALUE_TYPE" to "short" Defined "IS_COLOR_VALUE_TYPE_F" to "integer2" ================================================================================ TEST configureCPURelax from config.atomics(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/atomics.py:17) TESTING: configureCPURelax from config.atomics(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/atomics.py:17) Definitions for cpu relax assembly instructions All intermediate test results are stored in /tmp/petsc-wjcu960y/config.atomics Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o 
-I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__("rep; nop" ::: "memory");; return 0; } Defined "CPU_RELAX()" to "__asm__ __volatile__("rep; nop" ::: "memory")" ================================================================================ TEST configureMemoryBarriers from config.atomics(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/atomics.py:38) TESTING: configureMemoryBarriers from config.atomics(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/atomics.py:38) Definitions for memory barrier instructions Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("mfence":::"memory"); return 0; } Defined "MEMORY_BARRIER()" to "__asm__ __volatile__ ("mfence":::"memory")" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("lfence":::"memory"); return 0; } Defined "READ_MEMORY_BARRIER()" to "__asm__ __volatile__ ("lfence":::"memory")" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("sfence":::"memory"); return 0; } Defined "WRITE_MEMORY_BARRIER()" to "__asm__ __volatile__ ("sfence":::"memory")" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers 
-I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.atomics/conftest.c: Assembler messages: /tmp/petsc-wjcu960y/config.atomics/conftest.c:5: Error: no such instruction: `sync' Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("sync":::"memory"); return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.atomics/conftest.c: Assembler messages: /tmp/petsc-wjcu960y/config.atomics/conftest.c:5: Error: no such instruction: `lwsync' Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("lwsync":::"memory"); return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.atomics/conftest.c: Assembler messages: /tmp/petsc-wjcu960y/config.atomics/conftest.c:5: Error: no such instruction: `eieio' Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("eieio":::"memory"); return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.atomics/conftest.c: Assembler messages: /tmp/petsc-wjcu960y/config.atomics/conftest.c:5: Error: no such instruction: `dmb' Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("dmb":::"memory"); return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers 
-I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.atomics/conftest.c: Assembler messages: /tmp/petsc-wjcu960y/config.atomics/conftest.c:5: Error: no such instruction: `dmb ish' Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("dmb ish":::"memory"); return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.atomics/conftest.c: Assembler messages: /tmp/petsc-wjcu960y/config.atomics/conftest.c:5: Error: no such instruction: `dmb ishld' Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("dmb ishld":::"memory"); return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.atomics/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.atomics/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.atomics/conftest.c: Assembler messages: /tmp/petsc-wjcu960y/config.atomics/conftest.c:5: Error: no such instruction: `dmb ishst' Source: #include "confdefs.h" #include "conffix.h" int main() { __asm__ __volatile__ ("dmb ishst":::"memory"); return 0; } ================================================================================ TEST checkMemcmp from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:110) TESTING: checkMemcmp from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:110) Check for 8-bit clean memcmp Making executable to test memcmp() All intermediate test results are stored in /tmp/petsc-wjcu960y/config.functions Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.atomics -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas 
-fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include void exit(int); int main() { char c0 = 0x40; char c1 = (char) 0x80; char c2 = (char) 0x81; exit(memcmp(&c0, &c2, 1) < 0 && memcmp(&c1, &c2, 1) < 0 ? 0 : 1); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.functions/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.functions/conftest Executing: /tmp/petsc-wjcu960y/config.functions/conftest ================================================================================ TEST checkSysinfo from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:135) TESTING: checkSysinfo from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:135) Check whether sysinfo takes three arguments, and if it does define HAVE_SYSINFO_3ARG Checking for functions [sysinfo] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char sysinfo(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_sysinfo) || defined (__stub___sysinfo) sysinfo_will_always_fail_with_ENOSYS(); #else sysinfo(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_SYSINFO" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:13:4: error: #error "Cannot check sysinfo without special headers" # error "Cannot check sysinfo without special headers" ^~~~~ /tmp/petsc-wjcu960y/config.functions/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.functions/conftest.c:17:30: warning: implicit declaration of function ???sysinfo??? [-Wimplicit-function-declaration] char buf[10]; long count=10; sysinfo(1, buf, count); ^~~~~~~ Source: #include "confdefs.h" #include "conffix.h" #ifdef HAVE_LINUX_KERNEL_H # include # include # ifdef HAVE_SYS_SYSINFO_H # include # endif #elif defined(HAVE_SYS_SYSTEMINFO_H) # include #else # error "Cannot check sysinfo without special headers" #endif int main() { char buf[10]; long count=10; sysinfo(1, buf, count); ; return 0; } Compile failed inside link ================================================================================ TEST checkVPrintf from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:158) TESTING: checkVPrintf from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:158) Checks whether vprintf requires a char * last argument, and if it does defines HAVE_VPRINTF_CHAR Checking for functions [vprintf] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:13:6: warning: conflicting types for built-in function ???vprintf??? [-Wbuiltin-declaration-mismatch] char vprintf(); ^~~~~~~ Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. 
*/ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char vprintf(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_vprintf) || defined (__stub___vprintf) vprintf_will_always_fail_with_ENOSYS(); #else vprintf(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_VPRINTF" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include int main() { va_list Argp; vprintf( "%d", Argp ); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl ================================================================================ TEST checkVFPrintf from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:165) TESTING: checkVFPrintf from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:165) Checks whether vfprintf requires a char * last argument, and if it does defines HAVE_VFPRINTF_CHAR Checking for functions [vfprintf] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:13:6: warning: conflicting types for built-in function 'vfprintf' [-Wbuiltin-declaration-mismatch] char vfprintf(); ^~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. 
*/ char vfprintf(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_vfprintf) || defined (__stub___vfprintf) vfprintf_will_always_fail_with_ENOSYS(); #else vfprintf(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_VFPRINTF" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include int main() { va_list Argp; vfprintf(stdout, "%d", Argp ); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl ================================================================================ TEST checkVSNPrintf from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:172) TESTING: checkVSNPrintf from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:172) Checks whether vsnprintf requires a char * last argument, and if it does defines HAVE_VSNPRINTF_CHAR Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include int main() { va_list Argp;char str[6]; vsnprintf(str,5, "%d", Argp ); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_VSNPRINTF" to "1" ================================================================================ TEST checkNanosleep from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:202) TESTING: checkNanosleep from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:202) Check for functional nanosleep() - as time.h behaves differently for different compiler flags - like -std=c89 Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers 
-I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { struct timespec tp; tp.tv_sec = 0; tp.tv_nsec = (long)(1e9); nanosleep(&tp,0); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_NANOSLEEP" to "1" ================================================================================ TEST checkMemmove from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:208) TESTING: checkMemmove from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:208) Check for functional memmove() - as MS VC requires correct includes to for this test Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { char c1[1], c2[1] = "c"; size_t n=1; memmove(c1,c2,n); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_MEMMOVE" to "1" ================================================================================ TEST checkSignalHandlerType from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:178) TESTING: checkSignalHandlerType from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:178) Checks the type of C++ signals handlers, and defines SIGNAL_CAST to the correct value Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.functions -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.functions/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include static void myhandler(int sig) {} int main() { signal(SIGFPE,myhandler); ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.functions/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O 
/tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "SIGNAL_CAST" to " " ================================================================================ TEST checkFreeReturnType from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:188) TESTING: checkFreeReturnType from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:188) Checks whether free returns void or int, and defines HAVE_FREE_RETURN_INT Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c: In function 'main': /tmp/petsc-wjcu960y/config.functions/conftest.c:6:25: error: void value not ignored as it ought to be int ierr; void *p; ierr = free(p); return 0; ^ /tmp/petsc-wjcu960y/config.functions/conftest.c:6:5: warning: variable 'ierr' set but not used [-Wunused-but-set-variable] int ierr; void *p; ierr = free(p); return 0; ^~~~ Source: #include "confdefs.h" #include "conffix.h" #include int main() { int ierr; void *p; ierr = free(p); return 0; ; return 0; } Compile failed inside link ================================================================================ TEST checkVariableArgumentLists from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:194) TESTING: checkVariableArgumentLists from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:194) Checks whether the variable argument list functionality is working Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { va_list l1, l2; va_copy(l1, l2); return 0; ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_VA_COPY" to "1" ================================================================================ TEST checkClassify from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:89) TESTING: checkClassify from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:89) Recursive decompose to rapidly classify functions as found or 
missing To confirm that a function is missing, we require a compile/link failure with only that function in a compilation unit. In contrast, we can confirm that many functions are present by compiling them all together in a large compilation unit. We optimistically compile everything together, then trim all functions that were named in the error message and bisect the result. The trimming is only an optimization to increase the likelihood of a big-batch compile succeeding; we do not rely on the compiler naming missing functions. Checking for functions [uname bzero usleep gettimeofday _sleep gethostbyname snprintf getrusage socket getdomainname _snprintf _getcwd sbreak _access _set_output_format dlclose strcasecmp gethostname sysctlbyname signal sigaction times drand48 mkstemp lseek access PXFGETARG dlopen readlink dlsym _lseek getcwd fork clock stricmp _mkdir realpath popen time rand memalign sigset sleep getwd dlerror getpagesize get_nprocs] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:14:6: warning: conflicting types for built-in function 'bzero' [-Wbuiltin-declaration-mismatch] char bzero(); ^~~~~ /tmp/petsc-wjcu960y/config.functions/conftest.c:19:6: warning: conflicting types for built-in function 'snprintf' [-Wbuiltin-declaration-mismatch] char snprintf(); ^~~~~~~~ /tmp/petsc-wjcu960y/config.functions/conftest.c:29:6: warning: conflicting types for built-in function 'strcasecmp' [-Wbuiltin-declaration-mismatch] char strcasecmp(); ^~~~~~~~~~ /tmp/petsc-wjcu960y/config.functions/conftest.c:45:6: warning: conflicting types for built-in function 'fork' [-Wbuiltin-declaration-mismatch] char fork(); ^~~~ Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char uname(); char bzero(); char usleep(); char gettimeofday(); char _sleep(); char gethostbyname(); char snprintf(); char getrusage(); char socket(); char getdomainname(); char _snprintf(); char _getcwd(); char sbreak(); char _access(); char _set_output_format(); char dlclose(); char strcasecmp(); char gethostname(); char sysctlbyname(); char signal(); char sigaction(); char times(); char drand48(); char mkstemp(); char lseek(); char access(); char PXFGETARG(); char dlopen(); char readlink(); char dlsym(); char _lseek(); char getcwd(); char fork(); char clock(); char stricmp(); char _mkdir(); char realpath(); char popen(); char time(); char rand(); char memalign(); char sigset(); char sleep(); char getwd(); char dlerror(); char getpagesize(); char get_nprocs(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_uname) || defined (__stub___uname) uname_will_always_fail_with_ENOSYS(); #else uname(); #endif #if defined (__stub_bzero) || defined (__stub___bzero) bzero_will_always_fail_with_ENOSYS(); #else bzero(); #endif #if defined (__stub_usleep) || defined (__stub___usleep) usleep_will_always_fail_with_ENOSYS(); #else usleep(); #endif #if defined (__stub_gettimeofday) || defined (__stub___gettimeofday) gettimeofday_will_always_fail_with_ENOSYS(); #else gettimeofday(); #endif #if defined (__stub__sleep) || defined (__stub____sleep) _sleep_will_always_fail_with_ENOSYS(); #else _sleep(); #endif #if defined (__stub_gethostbyname) || defined (__stub___gethostbyname) gethostbyname_will_always_fail_with_ENOSYS(); #else gethostbyname(); #endif #if defined (__stub_snprintf) || defined (__stub___snprintf) snprintf_will_always_fail_with_ENOSYS(); #else snprintf(); #endif #if defined (__stub_getrusage) || defined (__stub___getrusage) getrusage_will_always_fail_with_ENOSYS(); #else getrusage(); #endif #if defined (__stub_socket) || defined (__stub___socket) socket_will_always_fail_with_ENOSYS(); #else socket(); #endif #if defined (__stub_getdomainname) || defined (__stub___getdomainname) getdomainname_will_always_fail_with_ENOSYS(); #else getdomainname(); #endif #if defined (__stub__snprintf) || defined (__stub____snprintf) _snprintf_will_always_fail_with_ENOSYS(); #else _snprintf(); #endif #if defined (__stub__getcwd) || defined (__stub____getcwd) _getcwd_will_always_fail_with_ENOSYS(); #else _getcwd(); #endif #if defined (__stub_sbreak) || defined (__stub___sbreak) sbreak_will_always_fail_with_ENOSYS(); #else sbreak(); #endif #if defined (__stub__access) || defined (__stub____access) _access_will_always_fail_with_ENOSYS(); #else _access(); #endif #if defined (__stub__set_output_format) || defined (__stub____set_output_format) _set_output_format_will_always_fail_with_ENOSYS(); #else _set_output_format(); #endif #if defined (__stub_dlclose) || defined (__stub___dlclose) dlclose_will_always_fail_with_ENOSYS(); #else dlclose(); #endif #if defined (__stub_strcasecmp) || defined (__stub___strcasecmp) strcasecmp_will_always_fail_with_ENOSYS(); #else strcasecmp(); #endif #if defined (__stub_gethostname) || defined (__stub___gethostname) gethostname_will_always_fail_with_ENOSYS(); #else gethostname(); #endif #if defined (__stub_sysctlbyname) || defined (__stub___sysctlbyname) sysctlbyname_will_always_fail_with_ENOSYS(); #else sysctlbyname(); #endif #if defined (__stub_signal) || defined (__stub___signal) signal_will_always_fail_with_ENOSYS(); #else signal(); #endif #if defined (__stub_sigaction) || defined (__stub___sigaction) sigaction_will_always_fail_with_ENOSYS(); #else sigaction(); #endif 
#if defined (__stub_times) || defined (__stub___times) times_will_always_fail_with_ENOSYS(); #else times(); #endif #if defined (__stub_drand48) || defined (__stub___drand48) drand48_will_always_fail_with_ENOSYS(); #else drand48(); #endif #if defined (__stub_mkstemp) || defined (__stub___mkstemp) mkstemp_will_always_fail_with_ENOSYS(); #else mkstemp(); #endif #if defined (__stub_lseek) || defined (__stub___lseek) lseek_will_always_fail_with_ENOSYS(); #else lseek(); #endif #if defined (__stub_access) || defined (__stub___access) access_will_always_fail_with_ENOSYS(); #else access(); #endif #if defined (__stub_PXFGETARG) || defined (__stub___PXFGETARG) PXFGETARG_will_always_fail_with_ENOSYS(); #else PXFGETARG(); #endif #if defined (__stub_dlopen) || defined (__stub___dlopen) dlopen_will_always_fail_with_ENOSYS(); #else dlopen(); #endif #if defined (__stub_readlink) || defined (__stub___readlink) readlink_will_always_fail_with_ENOSYS(); #else readlink(); #endif #if defined (__stub_dlsym) || defined (__stub___dlsym) dlsym_will_always_fail_with_ENOSYS(); #else dlsym(); #endif #if defined (__stub__lseek) || defined (__stub____lseek) _lseek_will_always_fail_with_ENOSYS(); #else _lseek(); #endif #if defined (__stub_getcwd) || defined (__stub___getcwd) getcwd_will_always_fail_with_ENOSYS(); #else getcwd(); #endif #if defined (__stub_fork) || defined (__stub___fork) fork_will_always_fail_with_ENOSYS(); #else fork(); #endif #if defined (__stub_clock) || defined (__stub___clock) clock_will_always_fail_with_ENOSYS(); #else clock(); #endif #if defined (__stub_stricmp) || defined (__stub___stricmp) stricmp_will_always_fail_with_ENOSYS(); #else stricmp(); #endif #if defined (__stub__mkdir) || defined (__stub____mkdir) _mkdir_will_always_fail_with_ENOSYS(); #else _mkdir(); #endif #if defined (__stub_realpath) || defined (__stub___realpath) realpath_will_always_fail_with_ENOSYS(); #else realpath(); #endif #if defined (__stub_popen) || defined (__stub___popen) popen_will_always_fail_with_ENOSYS(); #else popen(); #endif #if defined (__stub_time) || defined (__stub___time) time_will_always_fail_with_ENOSYS(); #else time(); #endif #if defined (__stub_rand) || defined (__stub___rand) rand_will_always_fail_with_ENOSYS(); #else rand(); #endif #if defined (__stub_memalign) || defined (__stub___memalign) memalign_will_always_fail_with_ENOSYS(); #else memalign(); #endif #if defined (__stub_sigset) || defined (__stub___sigset) sigset_will_always_fail_with_ENOSYS(); #else sigset(); #endif #if defined (__stub_sleep) || defined (__stub___sleep) sleep_will_always_fail_with_ENOSYS(); #else sleep(); #endif #if defined (__stub_getwd) || defined (__stub___getwd) getwd_will_always_fail_with_ENOSYS(); #else getwd(); #endif #if defined (__stub_dlerror) || defined (__stub___dlerror) dlerror_will_always_fail_with_ENOSYS(); #else dlerror(); #endif #if defined (__stub_getpagesize) || defined (__stub___getpagesize) getpagesize_will_always_fail_with_ENOSYS(); #else getpagesize(); #endif #if defined (__stub_get_nprocs) || defined (__stub___get_nprocs) get_nprocs_will_always_fail_with_ENOSYS(); #else get_nprocs(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: 
/tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:328: warning: the `getwd' function is dangerous and should not be used. /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:94: undefined reference to `_sleep' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:130: undefined reference to `_snprintf' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:136: undefined reference to `_getcwd' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:142: undefined reference to `sbreak' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:148: undefined reference to `_access' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:154: undefined reference to `_set_output_format' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:178: undefined reference to `sysctlbyname' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:226: undefined reference to `PXFGETARG' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:250: undefined reference to `_lseek' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:274: undefined reference to `stricmp' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.c:280: undefined reference to `_mkdir' collect2: error: ld returned 1 exit status Checking for functions [uname bzero usleep gettimeofday gethostbyname getrusage socket getdomainname dlclose strcasecmp gethostname signal sigaction times drand48] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:14:6: warning: conflicting types for built-in function 'bzero' [-Wbuiltin-declaration-mismatch] char bzero(); ^~~~~ /tmp/petsc-wjcu960y/config.functions/conftest.c:22:6: warning: conflicting types for built-in function 'strcasecmp' [-Wbuiltin-declaration-mismatch] char strcasecmp(); ^~~~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. 
*/ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char uname(); char bzero(); char usleep(); char gettimeofday(); char gethostbyname(); char getrusage(); char socket(); char getdomainname(); char dlclose(); char strcasecmp(); char gethostname(); char signal(); char sigaction(); char times(); char drand48(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_uname) || defined (__stub___uname) uname_will_always_fail_with_ENOSYS(); #else uname(); #endif #if defined (__stub_bzero) || defined (__stub___bzero) bzero_will_always_fail_with_ENOSYS(); #else bzero(); #endif #if defined (__stub_usleep) || defined (__stub___usleep) usleep_will_always_fail_with_ENOSYS(); #else usleep(); #endif #if defined (__stub_gettimeofday) || defined (__stub___gettimeofday) gettimeofday_will_always_fail_with_ENOSYS(); #else gettimeofday(); #endif #if defined (__stub_gethostbyname) || defined (__stub___gethostbyname) gethostbyname_will_always_fail_with_ENOSYS(); #else gethostbyname(); #endif #if defined (__stub_getrusage) || defined (__stub___getrusage) getrusage_will_always_fail_with_ENOSYS(); #else getrusage(); #endif #if defined (__stub_socket) || defined (__stub___socket) socket_will_always_fail_with_ENOSYS(); #else socket(); #endif #if defined (__stub_getdomainname) || defined (__stub___getdomainname) getdomainname_will_always_fail_with_ENOSYS(); #else getdomainname(); #endif #if defined (__stub_dlclose) || defined (__stub___dlclose) dlclose_will_always_fail_with_ENOSYS(); #else dlclose(); #endif #if defined (__stub_strcasecmp) || defined (__stub___strcasecmp) strcasecmp_will_always_fail_with_ENOSYS(); #else strcasecmp(); #endif #if defined (__stub_gethostname) || defined (__stub___gethostname) gethostname_will_always_fail_with_ENOSYS(); #else gethostname(); #endif #if defined (__stub_signal) || defined (__stub___signal) signal_will_always_fail_with_ENOSYS(); #else signal(); #endif #if defined (__stub_sigaction) || defined (__stub___sigaction) sigaction_will_always_fail_with_ENOSYS(); #else sigaction(); #endif #if defined (__stub_times) || defined (__stub___times) times_will_always_fail_with_ENOSYS(); #else times(); #endif #if defined (__stub_drand48) || defined (__stub___drand48) drand48_will_always_fail_with_ENOSYS(); #else drand48(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_UNAME" to "1" Defined "HAVE_BZERO" to "1" Defined "HAVE_USLEEP" to "1" Defined "HAVE_GETTIMEOFDAY" to "1" Defined "HAVE_GETHOSTBYNAME" to "1" Defined "HAVE_GETRUSAGE" to "1" Defined "HAVE_SOCKET" to "1" Defined "HAVE_GETDOMAINNAME" to "1" Defined "HAVE_DLCLOSE" to "1" Defined "HAVE_STRCASECMP" to "1" Defined "HAVE_GETHOSTNAME" to "1" Defined "HAVE_SIGNAL" to "1" Defined "HAVE_SIGACTION" to "1" Defined "HAVE_TIMES" to "1" Defined "HAVE_DRAND48" to "1" Checking for functions [mkstemp dlopen readlink dlsym fork clock realpath popen time rand memalign sigset dlerror getpagesize get_nprocs] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails 
-I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:17:6: warning: conflicting types for built-in function 'fork' [-Wbuiltin-declaration-mismatch] char fork(); ^~~~ Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char mkstemp(); char dlopen(); char readlink(); char dlsym(); char fork(); char clock(); char realpath(); char popen(); char time(); char rand(); char memalign(); char sigset(); char dlerror(); char getpagesize(); char get_nprocs(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_mkstemp) || defined (__stub___mkstemp) mkstemp_will_always_fail_with_ENOSYS(); #else mkstemp(); #endif #if defined (__stub_dlopen) || defined (__stub___dlopen) dlopen_will_always_fail_with_ENOSYS(); #else dlopen(); #endif #if defined (__stub_readlink) || defined (__stub___readlink) readlink_will_always_fail_with_ENOSYS(); #else readlink(); #endif #if defined (__stub_dlsym) || defined (__stub___dlsym) dlsym_will_always_fail_with_ENOSYS(); #else dlsym(); #endif #if defined (__stub_fork) || defined (__stub___fork) fork_will_always_fail_with_ENOSYS(); #else fork(); #endif #if defined (__stub_clock) || defined (__stub___clock) clock_will_always_fail_with_ENOSYS(); #else clock(); #endif #if defined (__stub_realpath) || defined (__stub___realpath) realpath_will_always_fail_with_ENOSYS(); #else realpath(); #endif #if defined (__stub_popen) || defined (__stub___popen) popen_will_always_fail_with_ENOSYS(); #else popen(); #endif #if defined (__stub_time) || defined (__stub___time) time_will_always_fail_with_ENOSYS(); #else time(); #endif #if defined (__stub_rand) || defined (__stub___rand) rand_will_always_fail_with_ENOSYS(); #else rand(); #endif #if defined (__stub_memalign) || defined (__stub___memalign) memalign_will_always_fail_with_ENOSYS(); #else memalign(); #endif #if defined (__stub_sigset) || defined (__stub___sigset) sigset_will_always_fail_with_ENOSYS(); #else sigset(); #endif #if defined (__stub_dlerror) || defined (__stub___dlerror) dlerror_will_always_fail_with_ENOSYS(); #else dlerror(); #endif #if defined (__stub_getpagesize) || defined (__stub___getpagesize) getpagesize_will_always_fail_with_ENOSYS(); #else getpagesize(); #endif #if defined (__stub_get_nprocs) || defined (__stub___get_nprocs) get_nprocs_will_always_fail_with_ENOSYS(); #else get_nprocs(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_MKSTEMP" to "1" Defined "HAVE_DLOPEN" to "1" Defined "HAVE_READLINK" to "1" Defined "HAVE_DLSYM" to "1" Defined "HAVE_FORK" to "1" Defined "HAVE_CLOCK" to "1" Defined "HAVE_REALPATH" to "1" Defined "HAVE_POPEN" to "1" Defined 
"HAVE_TIME" to "1" Defined "HAVE_RAND" to "1" Defined "HAVE_MEMALIGN" to "1" Defined "HAVE_SIGSET" to "1" Defined "HAVE_DLERROR" to "1" Defined "HAVE_GETPAGESIZE" to "1" Defined "HAVE_GET_NPROCS" to "1" Checking for functions [_sleep] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char _sleep(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__sleep) || defined (__stub____sleep) _sleep_will_always_fail_with_ENOSYS(); #else _sleep(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `_sleep' collect2: error: ld returned 1 exit status Checking for functions [snprintf] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:13:6: warning: conflicting types for built-in function 'snprintf' [-Wbuiltin-declaration-mismatch] char snprintf(); ^~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. 
*/ char snprintf(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_snprintf) || defined (__stub___snprintf) snprintf_will_always_fail_with_ENOSYS(); #else snprintf(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_SNPRINTF" to "1" Checking for functions [_snprintf] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char _snprintf(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__snprintf) || defined (__stub____snprintf) _snprintf_will_always_fail_with_ENOSYS(); #else _snprintf(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `_snprintf' collect2: error: ld returned 1 exit status Checking for functions [_getcwd] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char _getcwd(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__getcwd) || defined (__stub____getcwd) _getcwd_will_always_fail_with_ENOSYS(); #else _getcwd(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `_getcwd' collect2: error: ld returned 1 exit status Checking for functions [sbreak] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char sbreak(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_sbreak) || defined (__stub___sbreak) sbreak_will_always_fail_with_ENOSYS(); #else sbreak(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `sbreak' collect2: error: ld returned 1 exit status Checking for functions [_access] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. 
*/ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char _access(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__access) || defined (__stub____access) _access_will_always_fail_with_ENOSYS(); #else _access(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `_access' collect2: error: ld returned 1 exit status Checking for functions [_set_output_format] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char _set_output_format(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__set_output_format) || defined (__stub____set_output_format) _set_output_format_will_always_fail_with_ENOSYS(); #else _set_output_format(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `_set_output_format' collect2: error: ld returned 1 exit status Checking for functions [sysctlbyname] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char sysctlbyname(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_sysctlbyname) || defined (__stub___sysctlbyname) sysctlbyname_will_always_fail_with_ENOSYS(); #else sysctlbyname(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `sysctlbyname' collect2: error: ld returned 1 exit status Checking for functions [lseek] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char lseek(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_lseek) || defined (__stub___lseek) lseek_will_always_fail_with_ENOSYS(); #else lseek(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_LSEEK" to "1" Checking for functions [access] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char access(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_access) || defined (__stub___access) access_will_always_fail_with_ENOSYS(); #else access(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_ACCESS" to "1" Checking for functions [PXFGETARG] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char PXFGETARG(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_PXFGETARG) || defined (__stub___PXFGETARG) PXFGETARG_will_always_fail_with_ENOSYS(); #else PXFGETARG(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `PXFGETARG' collect2: error: ld returned 1 exit status Checking for functions [_lseek] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char _lseek(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__lseek) || defined (__stub____lseek) _lseek_will_always_fail_with_ENOSYS(); #else _lseek(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `_lseek' collect2: error: ld returned 1 exit status Checking for functions [getcwd] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char getcwd(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_getcwd) || defined (__stub___getcwd) getcwd_will_always_fail_with_ENOSYS(); #else getcwd(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_GETCWD" to "1" Checking for functions [stricmp] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char stricmp(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_stricmp) || defined (__stub___stricmp) stricmp_will_always_fail_with_ENOSYS(); #else stricmp(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `stricmp' collect2: error: ld returned 1 exit status Checking for functions [_mkdir] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char _mkdir(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__mkdir) || defined (__stub____mkdir) _mkdir_will_always_fail_with_ENOSYS(); #else _mkdir(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `_mkdir' collect2: error: ld returned 1 exit status Checking for functions [sleep] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. 
*/ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char sleep(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_sleep) || defined (__stub___sleep) sleep_will_always_fail_with_ENOSYS(); #else sleep(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_SLEEP" to "1" Checking for functions [getwd] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char getwd(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_getwd) || defined (__stub___getwd) getwd_will_always_fail_with_ENOSYS(); #else getwd(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: warning: the `getwd' function is dangerous and should not be used. 
Defined "HAVE_GETWD" to "1" ================================================================================ TEST checkMmap from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:214) TESTING: checkMmap from config.functions(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/functions.py:214) Check for functional mmap() to allocate shared memory and define HAVE_MMAP Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #include #include int main() { int fd; fd=open("/tmp/file",O_RDWR); mmap((void*)0,100,PROT_READ|PROT_WRITE,MAP_SHARED,fd,0); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_MMAP" to "1" ================================================================================ TEST configureMemorySize from config.utilities.getResidentSetSize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/getResidentSetSize.py:31) TESTING: configureMemorySize from config.utilities.getResidentSetSize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/getResidentSetSize.py:31) Try to determine how to measure the memory usage Defined "USE_PROC_FOR_SIZE" to "1" Using /proc for PetscMemoryGetCurrentUsage() ================================================================================ TEST configureFortranCommandLine from config.utilities.fortranCommandLine(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/fortranCommandLine.py:26) TESTING: configureFortranCommandLine from config.utilities.fortranCommandLine(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/fortranCommandLine.py:26) Check for the mechanism to retrieve command line arguments in Fortran Defined "HAVE_FORTRAN_GET_COMMAND_ARGUMENT" to "1" Checking for functions [] in library [''] [] Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.F90 Successful compile: Source: program main integer i character*(80) arg i = command_argument_count() call get_command_argument(i,arg) end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Defined "HAVE_GFORTRAN_IARGC" to "1" Checking for functions [get_command_argument_] in library [''] ['-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', 
'-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char get_command_argument_(); static void _check_get_command_argument_() { get_command_argument_(); } int main() { _check_get_command_argument_();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_get_command_argument_': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `get_command_argument_' collect2: error: ld returned 1 exit status Checking for functions [getarg_] in library [''] ['-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', 
'-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char getarg_(); static void _check_getarg_() { getarg_(); } int main() { _check_getarg_();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_getarg_': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `getarg_' collect2: error: ld returned 1 exit status Checking for functions [ipxfargc_] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. 
*/ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char ipxfargc_(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_ipxfargc_) || defined (__stub___ipxfargc_) ipxfargc__will_always_fail_with_ENOSYS(); #else ipxfargc_(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `ipxfargc_' collect2: error: ld returned 1 exit status Checking for functions [f90_unix_MP_iargc] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub 
macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char f90_unix_MP_iargc(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_f90_unix_MP_iargc) || defined (__stub___f90_unix_MP_iargc) f90_unix_MP_iargc_will_always_fail_with_ENOSYS(); #else f90_unix_MP_iargc(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `f90_unix_MP_iargc' collect2: error: ld returned 1 exit status Checking for functions [PXFGETARG] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings 
-Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char PXFGETARG(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_PXFGETARG) || defined (__stub___PXFGETARG) PXFGETARG_will_always_fail_with_ENOSYS(); #else PXFGETARG(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `PXFGETARG' collect2: error: ld returned 1 exit status Checking for functions [iargc_] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails 
-I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char iargc_(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_iargc_) || defined (__stub___iargc_) iargc__will_always_fail_with_ENOSYS(); #else iargc_(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.functions/conftest.o: in function `main': /tmp/petsc-wjcu960y/config.functions/conftest.c:24: undefined reference to `iargc_' collect2: error: ld returned 1 exit status Checking for functions [GETARG at 16] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers 
-I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.functions/conftest.c:13:12: error: stray ‘@’ in program char GETARG@16(); ^ /tmp/petsc-wjcu960y/config.functions/conftest.c:13:13: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before numeric constant char GETARG@16(); ^~ /tmp/petsc-wjcu960y/config.functions/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.functions/conftest.c:21:27: error: missing ')' after "defined" #if defined (__stub_GETARG@16) || defined (__stub___GETARG@16) ^ /tmp/petsc-wjcu960y/config.functions/conftest.c:21:28: error: missing binary operator before token "16" #if defined (__stub_GETARG@16) || defined (__stub___GETARG@16) ^~ /tmp/petsc-wjcu960y/config.functions/conftest.c:24:7: error: stray ‘@’ in program GETARG@16(); ^ /tmp/petsc-wjcu960y/config.functions/conftest.c:24:1: error: ‘GETARG’ undeclared (first use in this function) GETARG@16(); ^~~~~~ /tmp/petsc-wjcu960y/config.functions/conftest.c:24:1: note: each undeclared identifier is reported only once for each function it appears in /tmp/petsc-wjcu960y/config.functions/conftest.c:24:8: error: expected ‘;’ before numeric constant GETARG@16(); ^~ Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char GETARG@16(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_GETARG@16) || defined (__stub___GETARG@16) GETARG@16_will_always_fail_with_ENOSYS(); #else GETARG@16(); #endif ; return 0; } Compile failed inside link Checking for functions [_gfortran_iargc] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.functions/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char _gfortran_iargc(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__gfortran_iargc) || defined (__stub____gfortran_iargc) _gfortran_iargc_will_always_fail_with_ENOSYS(); #else _gfortran_iargc(); #endif ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.functions/conftest.o -lstdc++ -ldl -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath Defined "HAVE__GFORTRAN_IARGC" to "1" ================================================================================ TEST configureFeatureTestMacros from config.utilities.featureTestMacros(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/featureTestMacros.py:13) TESTING: configureFeatureTestMacros from config.utilities.featureTestMacros(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/featureTestMacros.py:13) Checks if certain feature test macros are support All intermediate test results are stored in /tmp/petsc-wjcu960y/config.utilities.featureTestMacros Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector 
-fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.c:4:10: fatal error: sysctl.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #define _POSIX_C_SOURCE 200112L #include int main() { ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.c Possible ERROR while running compiler: stderr: In file included from /usr/include/bits/libc-header-start.h:33:0, from /usr/include/stdlib.h:25, from /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.c:4: /usr/include/features.h:183:3: warning: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" [-Wcpp] # warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" ^~~~~~~ Source: #include "confdefs.h" #include "conffix.h" #define _BSD_SOURCE #include int main() { ; return 0; } Defined "_BSD_SOURCE" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #define _DEFAULT_SOURCE #include int main() { ; return 0; } Defined "_DEFAULT_SOURCE" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.featureTestMacros/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #define _GNU_SOURCE #include int main() { cpu_set_t mset; CPU_ZERO(&mset);; return 0; } Defined "_GNU_SOURCE" to "1" ================================================================================ TEST configureMissingDefines from 
config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:56) TESTING: configureMissingDefines from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:56) Checks for limits All intermediate test results are stored in /tmp/petsc-wjcu960y/config.utilities.missing Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_LIMITS_H #include #endif int main() { int i=INT_MAX; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_FLOAT_H #include #endif int main() { double d=DBL_MAX; if (d); ; return 0; } ================================================================================ TEST configureMissingUtypeTypedefs from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:66) TESTING: configureMissingUtypeTypedefs from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:66) Checks if u_short is undefined Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c:6:9: warning: unused variable ‘foo’ 
[-Wunused-variable] u_short foo; ^~~ Source: #include "confdefs.h" #include "conffix.h" #include int main() { u_short foo; ; return 0; } ================================================================================ TEST configureMissingFunctions from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:72) TESTING: configureMissingFunctions from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:72) Checks for SOCKETS ================================================================================ TEST configureMissingSignals from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:92) TESTING: configureMissingSignals from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:92) Check for missing signals, and define MISSING_ if necessary Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGABRT; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGALRM; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGBUS; if (i); ; return 0; } Executing: mpicc -c -o 
/tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGCHLD; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGCONT; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGFPE; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGHUP; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers 
-I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGILL; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGINT; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGKILL; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGPIPE; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails 
-I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGQUIT; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGSEGV; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGSTOP; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGSYS; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries 
-I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGTERM; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGTRAP; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGTSTP; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGURG; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing 
-Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGUSR1; if (i); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int i=SIGUSR2; if (i); ; return 0; } ================================================================================ TEST configureMissingGetdomainnamePrototype from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:109) TESTING: configureMissingGetdomainnamePrototype from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:109) Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #if !defined(_BSD_SOURCE) #define _BSD_SOURCE #endif #if !defined(_DEFAULT_SOURCE) #define _DEFAULT_SOURCE #endif #if !defined(_GNU_SOURCE) #define _GNU_SOURCE #endif #ifdef PETSC_HAVE_UNISTD_H #include #endif #ifdef PETSC_HAVE_NETDB_H #include #endif int main() { int (*getdomainname_ptr)(char*,size_t) = getdomainname; char test[10]; if (getdomainname_ptr(test,10)) return 1; ; return 0; } Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.missing -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.utilities.missing/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #if !defined(_BSD_SOURCE) #define _BSD_SOURCE #endif #if !defined(_DEFAULT_SOURCE) #define _DEFAULT_SOURCE #endif #if !defined(_GNU_SOURCE) #define _GNU_SOURCE #endif #ifdef PETSC_HAVE_UNISTD_H #include #endif #ifdef PETSC_HAVE_NETDB_H #include #endif int main() { int (*getdomainname_ptr)(char*,size_t) = getdomainname; char test[10]; 
if (getdomainname_ptr(test,10)) return 1; ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -lstdc++ -ldl ================================================================================ TEST configureMissingSrandPrototype from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:134) TESTING: configureMissingSrandPrototype from config.utilities.missing(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/missing.py:134) Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #if !defined(_BSD_SOURCE) #define _BSD_SOURCE #endif #if !defined(_DEFAULT_SOURCE) #define _DEFAULT_SOURCE #endif #if !defined(_GNU_SOURCE) #define _GNU_SOURCE #endif #ifdef PETSC_HAVE_STDLIB_H #include #endif int main() { double (*drand48_ptr)(void) = drand48; void (*srand48_ptr)(long int) = srand48; long int seed=10; srand48_ptr(seed); if (drand48_ptr() > 0.5) return 1; ; return 0; } Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.missing -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.utilities.missing/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #if !defined(_BSD_SOURCE) #define _BSD_SOURCE #endif #if !defined(_DEFAULT_SOURCE) #define _DEFAULT_SOURCE #endif #if !defined(_GNU_SOURCE) #define _GNU_SOURCE #endif #ifdef PETSC_HAVE_STDLIB_H #include #endif int main() { double (*drand48_ptr)(void) = drand48; void (*srand48_ptr)(long int) = srand48; long int seed=10; srand48_ptr(seed); if (drand48_ptr() > 0.5) return 1; ; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.utilities.missing/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.utilities.missing/conftest.o -lstdc++ -ldl ================================================================================ TEST configureFPTrap from config.utilities.FPTrap(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/FPTrap.py:27) TESTING: configureFPTrap from config.utilities.FPTrap(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/utilities/FPTrap.py:27) Checking the handling of floating point traps Checking for header: sigfpe.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers 
-I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sigfpe.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sigfpe.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: sigfpe.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: Checking for header: fpxcp.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: fpxcp.h: No such file or directory #include ^~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: fpxcp.h: No such file or directory #include ^~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: fpxcp.h: No such file or directory #include ^~~~~~~~~compilation terminated.: Checking for header: floatingpoint.h Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: floatingpoint.h: No such file or directory #include ^~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: floatingpoint.h: No such file or directory #include ^~~~~~~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: floatingpoint.h: No such file or directory #include ^~~~~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST configureScalarType from PETSc.options.scalarTypes(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/scalarTypes.py:36) TESTING: configureScalarType from PETSc.options.scalarTypes(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/scalarTypes.py:36) Choose between real and complex numbers Defined "USE_SCALAR_REAL" to "1" Scalar type is real All intermediate test results are stored in /tmp/petsc-wjcu960y/PETSc.options.scalarTypes Executing: mpicc -c -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6:21: warning: unused variable ‘a’ [-Wunused-variable] double b = 2.0; int a = isnormal(b); ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { double b = 2.0; int a = isnormal(b); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl Defined "HAVE_ISNORMAL" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6:21: warning: unused variable ‘a’ 
[-Wunused-variable] double b = 2.0; int a = isnan(b); ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { double b = 2.0; int a = isnan(b); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl Defined "HAVE_ISNAN" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6:21: warning: unused variable ‘a’ [-Wunused-variable] double b = 2.0; int a = isinf(b); ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { double b = 2.0; int a = isinf(b); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl Defined "HAVE_ISINF" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6:24: warning: implicit declaration of function ‘_isnan’ [-Wimplicit-function-declaration] double b = 2.0;int a = _isnan(b); ^~~~~~ /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6:20: warning: unused variable ‘a’ 
[-Wunused-variable] double b = 2.0;int a = _isnan(b); ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { double b = 2.0;int a = _isnan(b); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o: in function `main': /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6: undefined reference to `_isnan' collect2: error: ld returned 1 exit status Executing: mpicc -c -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6:24: warning: implicit declaration of function ‘_finite’ [-Wimplicit-function-declaration] double b = 2.0;int a = _finite(b); ^~~~~~~ /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6:20: warning: unused variable ‘a’ 
[-Wunused-variable] double b = 2.0;int a = _finite(b); ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { double b = 2.0;int a = _finite(b); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.o: in function `main': /tmp/petsc-wjcu960y/PETSc.options.scalarTypes/conftest.c:6: undefined reference to `_finite' collect2: error: ld returned 1 exit status ================================================================================ TEST configurePrecision from PETSc.options.scalarTypes(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/scalarTypes.py:78) TESTING: configurePrecision from PETSc.options.scalarTypes(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/PETSc/options/scalarTypes.py:78) Set the default real number precision for PETSc objects Defined "USE_REAL_DOUBLE" to "1" Defined make macro "PETSC_SCALAR_SIZE" to "64" Precision is double ================================================================================ TEST configureMkdir from config.programs(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/programs.py:22) TESTING: configureMkdir from config.programs(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/programs.py:22) Make sure we can have mkdir automatically make intermediate directories Checking for program /home/wangzl/miniconda3/bin/mkdir...not found Checking for program /home/wangzl/projects/moose/python/peacock/mkdir...not found Checking for program /home/wangzl/software/fftwmpi/bin/mkdir...not found Checking for program /home/wangzl/software/byacc/bin/mkdir...not found Checking for program /home/wangzl/software/m4/bin/mkdir...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mkdir...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mkdir...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mkdir...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mkdir...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mkdir...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mkdir...not found Checking for program /home/wangzl/miniconda3/bin/mkdir...not found Checking for program /home/wangzl/projects/moose/python/peacock/mkdir...not found Checking for program /home/wangzl/software/fftwmpi/bin/mkdir...not found Checking for program /home/wangzl/software/byacc/bin/mkdir...not found Checking for program /home/wangzl/software/m4/bin/mkdir...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mkdir...not found Checking for program /home/wangzl/bin/mkdir...not found Checking for program /usr/local/bin/mkdir...not found Checking for program /usr/bin/mkdir...found Executing: /usr/bin/mkdir -p .conftest/tmp Adding -p flag to /usr/bin/mkdir -p to automatically create directories Defined make macro "MKDIR" to "/usr/bin/mkdir -p" ================================================================================ TEST configureAutoreconf from 
config.programs(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/programs.py:44) TESTING: configureAutoreconf from config.programs(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/programs.py:44) Check for autoreconf Checking for program /home/wangzl/miniconda3/bin/autoreconf...not found Checking for program /home/wangzl/projects/moose/python/peacock/autoreconf...not found Checking for program /home/wangzl/software/fftwmpi/bin/autoreconf...not found Checking for program /home/wangzl/software/byacc/bin/autoreconf...not found Checking for program /home/wangzl/software/m4/bin/autoreconf...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/autoreconf...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/autoreconf...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/autoreconf...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/autoreconf...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/autoreconf...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/autoreconf...not found Checking for program /home/wangzl/miniconda3/bin/autoreconf...not found Checking for program /home/wangzl/projects/moose/python/peacock/autoreconf...not found Checking for program /home/wangzl/software/fftwmpi/bin/autoreconf...not found Checking for program /home/wangzl/software/byacc/bin/autoreconf...not found Checking for program /home/wangzl/software/m4/bin/autoreconf...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/autoreconf...not found Checking for program /home/wangzl/bin/autoreconf...not found Checking for program /usr/local/bin/autoreconf...not found Checking for program /usr/bin/autoreconf...found All intermediate test results are stored in /tmp/petsc-wjcu960y/config.programs Executing: ['/usr/bin/autoreconf'] autoreconf test successful! 
Checking for program /home/wangzl/miniconda3/bin/libtoolize...not found Checking for program /home/wangzl/projects/moose/python/peacock/libtoolize...not found Checking for program /home/wangzl/software/fftwmpi/bin/libtoolize...not found Checking for program /home/wangzl/software/byacc/bin/libtoolize...not found Checking for program /home/wangzl/software/m4/bin/libtoolize...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/libtoolize...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/libtoolize...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/libtoolize...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/libtoolize...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/libtoolize...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/libtoolize...not found Checking for program /home/wangzl/miniconda3/bin/libtoolize...not found Checking for program /home/wangzl/projects/moose/python/peacock/libtoolize...not found Checking for program /home/wangzl/software/fftwmpi/bin/libtoolize...not found Checking for program /home/wangzl/software/byacc/bin/libtoolize...not found Checking for program /home/wangzl/software/m4/bin/libtoolize...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/libtoolize...not found Checking for program /home/wangzl/bin/libtoolize...not found Checking for program /usr/local/bin/libtoolize...not found Checking for program /usr/bin/libtoolize...found ================================================================================ TEST configurePrograms from config.programs(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/programs.py:71) TESTING: configurePrograms from config.programs(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/programs.py:71) Check for the programs needed to build and run PETSc Checking for program /home/wangzl/miniconda3/bin/sh...not found Checking for program /home/wangzl/projects/moose/python/peacock/sh...not found Checking for program /home/wangzl/software/fftwmpi/bin/sh...not found Checking for program /home/wangzl/software/byacc/bin/sh...not found Checking for program /home/wangzl/software/m4/bin/sh...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/sh...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/sh...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/sh...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/sh...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/sh...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/sh...not found Checking for program /home/wangzl/miniconda3/bin/sh...not found Checking for program /home/wangzl/projects/moose/python/peacock/sh...not found Checking for program /home/wangzl/software/fftwmpi/bin/sh...not found Checking for program /home/wangzl/software/byacc/bin/sh...not found Checking for program /home/wangzl/software/m4/bin/sh...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/sh...not found Checking for program /home/wangzl/bin/sh...not found Checking for program /usr/local/bin/sh...not found Checking for program /usr/bin/sh...found Defined make macro "SHELL" to "/usr/bin/sh" Checking for program 
/home/wangzl/miniconda3/bin/sed...not found Checking for program /home/wangzl/projects/moose/python/peacock/sed...not found Checking for program /home/wangzl/software/fftwmpi/bin/sed...not found Checking for program /home/wangzl/software/byacc/bin/sed...not found Checking for program /home/wangzl/software/m4/bin/sed...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/sed...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/sed...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/sed...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/sed...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/sed...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/sed...not found Checking for program /home/wangzl/miniconda3/bin/sed...not found Checking for program /home/wangzl/projects/moose/python/peacock/sed...not found Checking for program /home/wangzl/software/fftwmpi/bin/sed...not found Checking for program /home/wangzl/software/byacc/bin/sed...not found Checking for program /home/wangzl/software/m4/bin/sed...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/sed...not found Checking for program /home/wangzl/bin/sed...not found Checking for program /usr/local/bin/sed...not found Checking for program /usr/bin/sed...found Defined make macro "SED" to "/usr/bin/sed" Executing: /usr/bin/sed -i s/sed/sd/g "/tmp/petsc-wjcu960y/config.programs/sed1" Adding SEDINPLACE cmd: /usr/bin/sed -i Defined make macro "SEDINPLACE" to "/usr/bin/sed -i" Checking for program /home/wangzl/miniconda3/bin/mv...not found Checking for program /home/wangzl/projects/moose/python/peacock/mv...not found Checking for program /home/wangzl/software/fftwmpi/bin/mv...not found Checking for program /home/wangzl/software/byacc/bin/mv...not found Checking for program /home/wangzl/software/m4/bin/mv...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mv...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mv...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mv...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mv...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mv...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mv...not found Checking for program /home/wangzl/miniconda3/bin/mv...not found Checking for program /home/wangzl/projects/moose/python/peacock/mv...not found Checking for program /home/wangzl/software/fftwmpi/bin/mv...not found Checking for program /home/wangzl/software/byacc/bin/mv...not found Checking for program /home/wangzl/software/m4/bin/mv...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mv...not found Checking for program /home/wangzl/bin/mv...not found Checking for program /usr/local/bin/mv...not found Checking for program /usr/bin/mv...found Defined make macro "MV" to "/usr/bin/mv" Checking for program /home/wangzl/miniconda3/bin/cp...not found Checking for program /home/wangzl/projects/moose/python/peacock/cp...not found Checking for program /home/wangzl/software/fftwmpi/bin/cp...not found Checking for program /home/wangzl/software/byacc/bin/cp...not found Checking for program /home/wangzl/software/m4/bin/cp...not found Checking for 
program /home/wangzl/software/flex-2.6.4/bin/cp...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/cp...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/cp...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/cp...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/cp...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/cp...not found Checking for program /home/wangzl/miniconda3/bin/cp...not found Checking for program /home/wangzl/projects/moose/python/peacock/cp...not found Checking for program /home/wangzl/software/fftwmpi/bin/cp...not found Checking for program /home/wangzl/software/byacc/bin/cp...not found Checking for program /home/wangzl/software/m4/bin/cp...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/cp...not found Checking for program /home/wangzl/bin/cp...not found Checking for program /usr/local/bin/cp...not found Checking for program /usr/bin/cp...found Defined make macro "CP" to "/usr/bin/cp" Checking for program /home/wangzl/miniconda3/bin/grep...not found Checking for program /home/wangzl/projects/moose/python/peacock/grep...not found Checking for program /home/wangzl/software/fftwmpi/bin/grep...not found Checking for program /home/wangzl/software/byacc/bin/grep...not found Checking for program /home/wangzl/software/m4/bin/grep...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/grep...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/grep...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/grep...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/grep...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/grep...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/grep...not found Checking for program /home/wangzl/miniconda3/bin/grep...not found Checking for program /home/wangzl/projects/moose/python/peacock/grep...not found Checking for program /home/wangzl/software/fftwmpi/bin/grep...not found Checking for program /home/wangzl/software/byacc/bin/grep...not found Checking for program /home/wangzl/software/m4/bin/grep...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/grep...not found Checking for program /home/wangzl/bin/grep...not found Checking for program /usr/local/bin/grep...not found Checking for program /usr/bin/grep...found Defined make macro "GREP" to "/usr/bin/grep" Checking for program /home/wangzl/miniconda3/bin/rm...not found Checking for program /home/wangzl/projects/moose/python/peacock/rm...not found Checking for program /home/wangzl/software/fftwmpi/bin/rm...not found Checking for program /home/wangzl/software/byacc/bin/rm...not found Checking for program /home/wangzl/software/m4/bin/rm...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/rm...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/rm...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/rm...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/rm...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/rm...not found Checking for program 
/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/rm...not found Checking for program /home/wangzl/miniconda3/bin/rm...not found Checking for program /home/wangzl/projects/moose/python/peacock/rm...not found Checking for program /home/wangzl/software/fftwmpi/bin/rm...not found Checking for program /home/wangzl/software/byacc/bin/rm...not found Checking for program /home/wangzl/software/m4/bin/rm...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/rm...not found Checking for program /home/wangzl/bin/rm...not found Checking for program /usr/local/bin/rm...not found Checking for program /usr/bin/rm...found Defined make macro "RM" to "/usr/bin/rm -f" Checking for program /home/wangzl/miniconda3/bin/diff...not found Checking for program /home/wangzl/projects/moose/python/peacock/diff...not found Checking for program /home/wangzl/software/fftwmpi/bin/diff...not found Checking for program /home/wangzl/software/byacc/bin/diff...not found Checking for program /home/wangzl/software/m4/bin/diff...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/diff...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/diff...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/diff...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/diff...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/diff...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/diff...not found Checking for program /home/wangzl/miniconda3/bin/diff...not found Checking for program /home/wangzl/projects/moose/python/peacock/diff...not found Checking for program /home/wangzl/software/fftwmpi/bin/diff...not found Checking for program /home/wangzl/software/byacc/bin/diff...not found Checking for program /home/wangzl/software/m4/bin/diff...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/diff...not found Checking for program /home/wangzl/bin/diff...not found Checking for program /usr/local/bin/diff...not found Checking for program /usr/bin/diff...found Executing: "/usr/bin/diff" -w "/tmp/petsc-wjcu960y/config.programs/diff1" "/tmp/petsc-wjcu960y/config.programs/diff2" Defined make macro "DIFF" to "/usr/bin/diff -w" Checking for program /usr/ucb/ps...not found Checking for program /usr/usb/ps...not found Checking for program /tmp/stack_temp.rFVgkc/petsc-3.11.4/lib/petsc/bin/win32fe/ps...not found Checking for program /home/wangzl/miniconda3/bin/gzip...not found Checking for program /home/wangzl/projects/moose/python/peacock/gzip...not found Checking for program /home/wangzl/software/fftwmpi/bin/gzip...not found Checking for program /home/wangzl/software/byacc/bin/gzip...not found Checking for program /home/wangzl/software/m4/bin/gzip...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/gzip...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/gzip...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/gzip...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/gzip...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/gzip...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/gzip...not found Checking for program /home/wangzl/miniconda3/bin/gzip...not found Checking 
for program /home/wangzl/projects/moose/python/peacock/gzip...not found Checking for program /home/wangzl/software/fftwmpi/bin/gzip...not found Checking for program /home/wangzl/software/byacc/bin/gzip...not found Checking for program /home/wangzl/software/m4/bin/gzip...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/gzip...not found Checking for program /home/wangzl/bin/gzip...not found Checking for program /usr/local/bin/gzip...not found Checking for program /usr/bin/gzip...found Defined make macro "GZIP" to "/usr/bin/gzip" Defined "HAVE_GZIP" to "1" Defined make macro "PYTHON" to "/home/wangzl/miniconda3/bin/python" Checking for program /home/wangzl/miniconda3/bin/m4...not found Checking for program /home/wangzl/projects/moose/python/peacock/m4...not found Checking for program /home/wangzl/software/fftwmpi/bin/m4...not found Checking for program /home/wangzl/software/byacc/bin/m4...not found Checking for program /home/wangzl/software/m4/bin/m4...found Defined make macro "M4" to "/home/wangzl/software/m4/bin/m4" ================================================================================ TEST configureMake from config.packages.make(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/make.py:85) TESTING: configureMake from config.packages.make(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/make.py:85) Check Guesses for GNU make Executing: gmake --version stdout: GNU Make 4.2.1 Built for x86_64-suse-linux-gnu Copyright (C) 1988-2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Checking for program /home/wangzl/miniconda3/bin/gmake...not found Checking for program /home/wangzl/projects/moose/python/peacock/gmake...not found Checking for program /home/wangzl/software/fftwmpi/bin/gmake...not found Checking for program /home/wangzl/software/byacc/bin/gmake...not found Checking for program /home/wangzl/software/m4/bin/gmake...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/gmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/gmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/gmake...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/gmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/gmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/gmake...not found Checking for program /home/wangzl/miniconda3/bin/gmake...not found Checking for program /home/wangzl/projects/moose/python/peacock/gmake...not found Checking for program /home/wangzl/software/fftwmpi/bin/gmake...not found Checking for program /home/wangzl/software/byacc/bin/gmake...not found Checking for program /home/wangzl/software/m4/bin/gmake...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/gmake...not found Checking for program /home/wangzl/bin/gmake...not found Checking for program /usr/local/bin/gmake...not found Checking for program /usr/bin/gmake...found Defined make macro "MAKE" to "/usr/bin/gmake" ================================================================================ TEST setupGNUMake from config.packages.make(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/make.py:113) TESTING: setupGNUMake from 
config.packages.make(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/make.py:113) Setup other GNU make stuff Executing: uname -s stdout: Linux Executing: uname -s stdout: Linux Defined make macro "MAKE_IS_GNUMAKE" to "1" Defined make rule "libc" with dependencies "${LIBNAME}(${OBJSC})" and code [] Defined make rule "libcxx" with dependencies "${LIBNAME}(${OBJSCXX})" and code [] Defined make rule "libcu" with dependencies "${LIBNAME}(${OBJSCU})" and code [] Defined make rule "libf" with dependencies "${OBJSF}" and code -${AR} ${AR_FLAGS} ${LIBNAME} ${OBJSF} Defined make macro "OMAKE_PRINTDIR " to "/usr/bin/gmake --print-directory" Defined make macro "OMAKE" to "/usr/bin/gmake --no-print-directory" Defined make macro "MAKE_PAR_OUT_FLG" to "--output-sync=recurse" ================================================================================ TEST configureMakeNP from config.packages.make(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/make.py:161) TESTING: configureMakeNP from config.packages.make(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/make.py:161) check no of cores on the build machine [perhaps to do make '-j ncores'] module multiprocessing found 20 cores: using make_np = 16 Defined make macro "MAKE_NP" to "16" Defined make macro "MAKE_TEST_NP" to "10" Defined make macro "MAKE_LOAD" to "30.0" Defined make macro "NPMAX" to "20" ================================================================================ TEST alternateConfigureLibrary from config.packages.OpenMPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.OpenMPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default Executing: uname -s stdout: Linux Executing: uname -s stdout: Linux ================================================================================ TEST alternateConfigureLibrary from config.packages.MPICH(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.MPICH(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST checkDependencies from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:523) TESTING: configureLibrary from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:523) Calls the regular package configureLibrary and then does an additional test needed by MPI ================================================================================== Checking for a functional MPI Checking for library in Compiler specific search MPI: [] ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from 
config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [MPI_Init MPI_Comm_create] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char MPI_Init(); static void _check_MPI_Init() { MPI_Init(); } char MPI_Comm_create(); static void _check_MPI_Comm_create() { MPI_Comm_create(); } int main() { _check_MPI_Init(); _check_MPI_Comm_create();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl No functions to check for in library [] [] Checking for headers Compiler specific search MPI: ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['mpi.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.headers -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include 
/tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Found header files ['mpi.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST configureConversion from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:251) TESTING: configureConversion from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:251) Check for the functions which convert communicators between C and Fortran - Define HAVE_MPI_COMM_F2C and HAVE_MPI_COMM_C2F if they are present - Some older MPI 1 implementations are missing these All intermediate test results are stored in /tmp/petsc-wjcu960y/config.packages.MPI Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { if (MPI_Comm_f2c((MPI_Fint)0)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_COMM_F2C" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { if (MPI_Comm_c2f(MPI_COMM_WORLD)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_COMM_C2F" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure 
-I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:6:10: warning: unused variable ‘a’ [-Wunused-variable] MPI_Fint a; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_Fint a; ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_FINT" to "1" ================================================================================ TEST checkMPICHorOpenMPI from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:460) TESTING: checkMPICHorOpenMPI from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:460) Determine if MPICH_NUMVERSION or OMPI_MAJOR_VERSION exist in mpi.h Used for consistency checking of MPI installation at compile time Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: In file included from /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:3:0: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/mpi.h:595:26: warning: overflow in implicit constant conversion [-Woverflow] #define I_MPI_NUMVERSION 20190005300 ^ /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:4:17: note: in expansion of macro ‘I_MPI_NUMVERSION’
int mpich_ver = I_MPI_NUMVERSION; ^~~~~~~~~~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" #include int mpich_ver = I_MPI_NUMVERSION; int main() { ; return 0; } Source: #include "confdefs.h" #include "conffix.h" #include int mpich_ver = I_MPI_NUMVERSION; Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Defined "HAVE_I_MPI_NUMVERSION" to "20190005300" ================================================================================ TEST configureMPI2 from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:184) TESTING: configureMPI2 from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:184) Check for functions added to the interface in MPI-2 Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int flag;if (MPI_Finalized(&flag)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_FINALIZED" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { if (MPI_Allreduce(MPI_IN_PLACE,0, 1, MPI_INT, MPI_SUM, MPI_COMM_SELF)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_IN_PLACE" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers 
-I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { int count=2; int blocklens[2]={0,1}; MPI_Aint indices[2]={0,1}; MPI_Datatype old_types[2]={0,1}; MPI_Datatype *newtype = 0; if (MPI_Type_create_struct(count, blocklens, indices, old_types, newtype)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_Comm_errhandler_fn * p_err_fun = 0; MPI_Errhandler * p_errhandler = 0; if (MPI_Comm_create_errhandler(p_err_fun,p_errhandler)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { if (MPI_Comm_set_errhandler(MPI_COMM_WORLD,MPI_ERRORS_RETURN)); ; return 0; } Executing: mpicc -o 
/tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { if (MPI_Reduce_local(0, 0, 0, MPI_INT, MPI_SUM));; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_REDUCE_LOCAL" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { char version[MPI_MAX_LIBRARY_VERSION_STRING];int verlen;if (MPI_Get_library_version(version,&verlen)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_GET_LIBRARY_VERSION" to "1" ================================================================================ TEST configureMPI3 from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:223) TESTING: configureMPI3 from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:223) Check for functions added to the interface in MPI-3 Checking for functions [MPI_Win_create] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types 
-I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char MPI_Win_create(); static void _check_MPI_Win_create() { MPI_Win_create(); } int main() { _check_MPI_Win_create();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_WIN_CREATE" to "1" Defined "HAVE_MPI_REPLACE" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_Comm scomm; if (MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &scomm)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_SHARED_COMM" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_Win win; if (MPI_Win_allocate_shared(100,10,MPI_INFO_NULL,MPI_COMM_WORLD, 0, &win)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O 
/tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_WIN_ALLOCATE_SHARED" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { if (MPI_Win_shared_query(MPI_WIN_NULL,0,0,0,0)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_WIN_SHARED_QUERY" to "1" Defined "HAVE_MPI_WIN_CREATE_FEATURE" to "1" Defined "HAVE_MPI_PROCESS_SHARED_MEMORY" to "1" ================================================================================ TEST configureTypes from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:271) TESTING: configureTypes from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:271) Checking for MPI types Checking for size of type: MPI_Comm Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif #define MPICH_IGNORE_CXX_SEEK #define MPICH_SKIP_MPICXX 1 #define OMPI_SKIP_MPICXX 1 #include int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(MPI_Comm)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_MPI_COMM" to "4" Checking for size of type: MPI_Fint Executing: mpicc -c -o 
/tmp/petsc-wjcu960y/config.types/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif #define MPICH_IGNORE_CXX_SEEK #define MPICH_SKIP_MPICXX 1 #define OMPI_SKIP_MPICXX 1 #include int main() { FILE *f = fopen("conftestval", "w"); if (!f) exit(1); fprintf(f, "%lu\n", (unsigned long)sizeof(MPI_Fint)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.types/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.types/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.types/conftest Executing: /tmp/petsc-wjcu960y/config.types/conftest Defined "SIZEOF_MPI_FINT" to "4" ================================================================================ TEST configureMPITypes from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:283) TESTING: configureMPITypes from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:283) Checking for MPI Datatype handles Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_STDLIB_H #include #endif #include int main() { MPI_Aint size; int ierr; MPI_Init(0,0); ierr = MPI_Type_extent(MPI_LONG_DOUBLE, &size); if(ierr || (size == 0)) exit(1); MPI_Finalize(); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing 
-I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_STDLIB_H #include #endif #include int main() { MPI_Aint size; int ierr; MPI_Init(0,0); ierr = MPI_Type_extent(MPI_LONG_DOUBLE, &size); if(ierr || (size == 0)) exit(1); MPI_Finalize(); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.packages.MPI/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.packages.MPI/conftest Executing: /tmp/petsc-wjcu960y/config.packages.MPI/conftest Defined "HAVE_MPI_LONG_DOUBLE" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_STDLIB_H #include #endif #include int main() { MPI_Aint size; int ierr; MPI_Init(0,0); ierr = MPI_Type_extent(MPI_INT64_T, &size); if(ierr || (size == 0)) exit(1); MPI_Finalize(); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_STDLIB_H #include #endif #include int main() { MPI_Aint size; int ierr; MPI_Init(0,0); ierr = MPI_Type_extent(MPI_INT64_T, &size); if(ierr || (size == 0)) exit(1); MPI_Finalize(); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Testing executable 
/tmp/petsc-wjcu960y/config.packages.MPI/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.packages.MPI/conftest Executing: /tmp/petsc-wjcu960y/config.packages.MPI/conftest Defined "HAVE_MPI_INT64_T" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_STDLIB_H #include #endif #include int main() { MPI_Aint size; int ierr; MPI_Init(0,0); ierr = MPI_Type_extent(MPI_C_DOUBLE_COMPLEX, &size); if(ierr || (size == 0)) exit(1); MPI_Finalize(); ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #ifdef PETSC_HAVE_STDLIB_H #include #endif #include int main() { MPI_Aint size; int ierr; MPI_Init(0,0); ierr = MPI_Type_extent(MPI_C_DOUBLE_COMPLEX, &size); if(ierr || (size == 0)) exit(1); MPI_Finalize(); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.packages.MPI/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.packages.MPI/conftest Executing: /tmp/petsc-wjcu960y/config.packages.MPI/conftest Defined "HAVE_MPI_C_DOUBLE_COMPLEX" to "1" ================================================================================ TEST configureMissingPrototypes from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:351) TESTING: configureMissingPrototypes from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:351) Checks for missing prototypes, which it adds to petscfix.h ================================================================================ TEST SGIMPICheck from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:370) TESTING: 
SGIMPICheck from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:370) Returns true if SGI MPI is used Checking for functions [MPI_SGI_barrier] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char MPI_SGI_barrier(); static void _check_MPI_SGI_barrier() { MPI_SGI_barrier(); } int main() { _check_MPI_SGI_barrier();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_MPI_SGI_barrier': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `MPI_SGI_barrier' collect2: error: ld returned 1 exit status SGI MPI test failure ================================================================================ TEST CxxMPICheck from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:380) TESTING: CxxMPICheck from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:380) Make sure C++ can compile and link Checking for header mpi.h Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/config.libraries -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.libraries/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { ; return 0; } Checking for C++ MPI_Finalize() Checking for functions [MPI_Finalize] in library [] [] Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/config.libraries -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /tmp/petsc-wjcu960y/config.libraries/conftest.cc Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.libraries/conftest.cc: In function ‘void
_check_MPI_Finalize()’: /tmp/petsc-wjcu960y/config.libraries/conftest.cc:5:41: warning: variable ‘ierr’ set but not used [-Wunused-but-set-variable] static void _check_MPI_Finalize() { int ierr; ^~~~ Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ #include static void _check_MPI_Finalize() { int ierr; ierr = MPI_Finalize();; } int main() { _check_MPI_Finalize();; return 0; } Executing: mpicxx -o /tmp/petsc-wjcu960y/config.libraries/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl ================================================================================ TEST FortranMPICheck from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:398) TESTING: FortranMPICheck from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:398) Make sure fortran include [mpif.h] and library symbols are found Checking for header mpif.h Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.F90 Successful compile: Source: program main #include "mpif.h" end Checking for fortran mpi_init() Checking for functions [] in library [] [] Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.F90 Successful compile: Source: program main #include "mpif.h" integer ierr call mpi_init(ierr) end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Checking for mpi.mod Checking for functions [] in library [] [] Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.F90 Successful compile: Source: program main use mpi integer ierr,rank call mpi_init(ierr) call mpi_comm_rank(MPI_COMM_WORLD,rank,ierr) end Executing: mpif90 -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_F90MODULE" to "1" ================================================================================ TEST configureIO from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:423) TESTING: configureIO from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:423) Check for the functions in MPI/IO - Define HAVE_MPIIO if they are present - Some older MPI 1 implementations are missing these Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o
-I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_Aint lb, extent; if (MPI_Type_get_extent(MPI_INT, &lb, &extent)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:9:5: warning: ‘fh’ is used uninitialized in this function [-Wuninitialized] if (MPI_File_write_all(fh, buf, 1, MPI_INT, &status)); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:9:5: warning: ‘buf’
is used uninitialized in this function [-Wuninitialized] Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_File fh; void *buf; MPI_Status status; if (MPI_File_write_all(fh, buf, 1, MPI_INT, &status)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:9:5: warning: ‘fh’ is used uninitialized in this function [-Wuninitialized] if (MPI_File_read_all(fh, buf, 1, MPI_INT, &status)); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:9:5: warning: ‘buf’ is used uninitialized in this function [-Wuninitialized] Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_File fh; void *buf; MPI_Status status; if (MPI_File_read_all(fh, buf, 1, MPI_INT, &status)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:9:5: warning: ‘fh’ is used uninitialized in this function [-Wuninitialized] if (MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "", info)); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:9:5: warning: ‘disp’
is used uninitialized in this function [-Wuninitialized] /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:9:5: warning: ‘info’ is used uninitialized in this function [-Wuninitialized] Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_File fh; MPI_Offset disp; MPI_Info info; if (MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "", info)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:8:5: warning: ‘info’ is used uninitialized in this function [-Wuninitialized] if (MPI_File_open(MPI_COMM_SELF, "", 0, info, &fh)); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_File fh; MPI_Info info; if (MPI_File_open(MPI_COMM_SELF, "", 0, info, &fh)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:7:10: warning: unused variable ‘info’
[-Wunused-variable] MPI_Info info; ^~~~ Source: #include "confdefs.h" #include "conffix.h" #include int main() { MPI_File fh; MPI_Info info; if (MPI_File_close(&fh)); ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Defined "HAVE_MPIIO" to "1" ================================================================================ TEST findMPIInc from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:497) TESTING: findMPIInc from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:497) Find MPI include paths from "mpicc -show" and use with CUDAC_FLAGS Checking for functions [MPI_Alltoallw] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char MPI_Alltoallw(); static void _check_MPI_Alltoallw() { MPI_Alltoallw(); } int main() { _check_MPI_Alltoallw();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Checking for functions [MPI_Type_create_indexed_block] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char MPI_Type_create_indexed_block(); static void _check_MPI_Type_create_indexed_block() { MPI_Type_create_indexed_block(); } int main() { _check_MPI_Type_create_indexed_block();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_ALLTOALLW" to "1" Checking for functions [MPI_Comm_spawn MPI_Type_get_envelope MPI_Type_get_extent MPI_Type_dup MPI_Init_thread MPI_Iallreduce MPI_Ibarrier MPI_Finalized MPI_Exscan MPI_Reduce_scatter MPI_Reduce_scatter_block] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char MPI_Comm_spawn(); static void _check_MPI_Comm_spawn() { MPI_Comm_spawn(); } char MPI_Type_get_envelope(); static void _check_MPI_Type_get_envelope() { MPI_Type_get_envelope(); } char MPI_Type_get_extent(); static void _check_MPI_Type_get_extent() { MPI_Type_get_extent(); } char MPI_Type_dup(); static void _check_MPI_Type_dup() { MPI_Type_dup(); } char MPI_Init_thread(); static void _check_MPI_Init_thread() { MPI_Init_thread(); } char MPI_Iallreduce(); static void _check_MPI_Iallreduce() { MPI_Iallreduce(); } char MPI_Ibarrier(); static void _check_MPI_Ibarrier() { MPI_Ibarrier(); } char MPI_Finalized(); static void _check_MPI_Finalized() { MPI_Finalized(); } char MPI_Exscan(); static void _check_MPI_Exscan() { MPI_Exscan(); } char MPI_Reduce_scatter(); static void _check_MPI_Reduce_scatter() { MPI_Reduce_scatter(); } char MPI_Reduce_scatter_block(); static void _check_MPI_Reduce_scatter_block() { MPI_Reduce_scatter_block(); } int main() { _check_MPI_Comm_spawn(); _check_MPI_Type_get_envelope(); _check_MPI_Type_get_extent(); _check_MPI_Type_dup(); _check_MPI_Init_thread(); _check_MPI_Iallreduce(); _check_MPI_Ibarrier(); _check_MPI_Finalized(); _check_MPI_Exscan(); _check_MPI_Reduce_scatter(); _check_MPI_Reduce_scatter_block();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Defined "HAVE_MPI_COMM_SPAWN" to "1" Defined "HAVE_MPI_TYPE_GET_ENVELOPE" to "1" Defined "HAVE_MPI_TYPE_GET_EXTENT" to "1" Defined "HAVE_MPI_TYPE_DUP" to "1" Defined "HAVE_MPI_INIT_THREAD" to "1" Defined "HAVE_MPI_IALLREDUCE" to "1" Defined "HAVE_MPI_IBARRIER" to "1" Defined "HAVE_MPI_FINALIZED" to "1" Defined "HAVE_MPI_EXSCAN" to "1" Defined "HAVE_MPI_REDUCE_SCATTER" to "1" Defined "HAVE_MPI_REDUCE_SCATTER_BLOCK" to 
"1" Checking for functions [MPIX_Iallreduce] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char MPIX_Iallreduce(); static void _check_MPIX_Iallreduce() { MPIX_Iallreduce(); } int main() { _check_MPIX_Iallreduce();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_MPIX_Iallreduce': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `MPIX_Iallreduce' collect2: error: ld returned 1 exit status Checking for functions [MPIX_Ibarrier] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char MPIX_Ibarrier(); static void _check_MPIX_Ibarrier() { MPIX_Ibarrier(); } int main() { _check_MPIX_Ibarrier();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_MPIX_Ibarrier': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `MPIX_Ibarrier' collect2: error: ld returned 1 exit status Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:6:5: warning: unused variable ‘combiner’ [-Wunused-variable] int combiner = MPI_COMBINER_DUP;; ^~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" #include int main() { int combiner = MPI_COMBINER_DUP;; return 0; } Defined "HAVE_MPI_COMBINER_DUP" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ‘main’: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:6:5: warning: unused variable ‘combiner’
[-Wunused-variable] int combiner = MPI_COMBINER_CONTIGUOUS;; ^~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" #include int main() { int combiner = MPI_COMBINER_CONTIGUOUS;; return 0; } Defined "HAVE_MPI_COMBINER_CONTIGUOUS" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c:6:5: warning: unused variable ???combiner??? [-Wunused-variable] int combiner = MPI_COMBINER_NAMED;; ^~~~~~~~ Source: #include "confdefs.h" #include "conffix.h" #include int main() { int combiner = MPI_COMBINER_NAMED;; return 0; } Defined "HAVE_MPI_COMBINER_NAMED" to "1" Checking for functions [MPIDI_CH3I_sock_set] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
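(editor's note, not part of the original configure output: MPIDI_CH3I_sock_set and MPIDI_CH3I_sock_fixed_nbc_progress are MPICH-internal symbols; configure probes for them to recognise the MPICH ch3:sock device. With this Intel MPI installation the symbols are absent, so the "undefined reference" results below are expected and are not configuration errors.)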
*/ char MPIDI_CH3I_sock_set(); static void _check_MPIDI_CH3I_sock_set() { MPIDI_CH3I_sock_set(); } int main() { _check_MPIDI_CH3I_sock_set();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_MPIDI_CH3I_sock_set': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `MPIDI_CH3I_sock_set' collect2: error: ld returned 1 exit status Checking for functions [MPIDI_CH3I_sock_fixed_nbc_progress] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char MPIDI_CH3I_sock_fixed_nbc_progress(); static void _check_MPIDI_CH3I_sock_fixed_nbc_progress() { MPIDI_CH3I_sock_fixed_nbc_progress(); } int main() { _check_MPIDI_CH3I_sock_fixed_nbc_progress();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_MPIDI_CH3I_sock_fixed_nbc_progress': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `MPIDI_CH3I_sock_fixed_nbc_progress' collect2: error: ld returned 1 exit status ================================================================================ TEST checkSharedLibrary from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:132) TESTING: checkSharedLibrary from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:132) Sets flag indicating if MPI libraries are shared or not and determines if MPI libraries CANNOT be used by shared libraries ================================================================================ TEST configureMPIEXEC from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:145) TESTING: configureMPIEXEC from config.packages.MPI(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/MPI.py:145) Checking for mpiexec Checking for program /home/wangzl/miniconda3/bin/mpiexec...not found Checking for program 
/home/wangzl/miniconda3/bin/mpirun...not found Checking for program /home/wangzl/miniconda3/bin/mprun...not found Checking for program /home/wangzl/miniconda3/bin/mpiexec...not found Checking for program /home/wangzl/miniconda3/bin/mpirun...not found Checking for program /home/wangzl/miniconda3/bin/mprun...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpiexec...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpirun...not found Checking for program /home/wangzl/projects/moose/python/peacock/mprun...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpiexec...not found Checking for program /home/wangzl/projects/moose/python/peacock/mpirun...not found Checking for program /home/wangzl/projects/moose/python/peacock/mprun...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpiexec...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpirun...not found Checking for program /home/wangzl/software/fftwmpi/bin/mprun...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpiexec...not found Checking for program /home/wangzl/software/fftwmpi/bin/mpirun...not found Checking for program /home/wangzl/software/fftwmpi/bin/mprun...not found Checking for program /home/wangzl/software/byacc/bin/mpiexec...not found Checking for program /home/wangzl/software/byacc/bin/mpirun...not found Checking for program /home/wangzl/software/byacc/bin/mprun...not found Checking for program /home/wangzl/software/byacc/bin/mpiexec...not found Checking for program /home/wangzl/software/byacc/bin/mpirun...not found Checking for program /home/wangzl/software/byacc/bin/mprun...not found Checking for program /home/wangzl/software/m4/bin/mpiexec...not found Checking for program /home/wangzl/software/m4/bin/mpirun...not found Checking for program /home/wangzl/software/m4/bin/mprun...not found Checking for program /home/wangzl/software/m4/bin/mpiexec...not found Checking for program /home/wangzl/software/m4/bin/mpirun...not found Checking for program /home/wangzl/software/m4/bin/mprun...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpiexec...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpirun...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mprun...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpiexec...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mpirun...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/mprun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpiexec...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpirun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mprun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpiexec...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mpirun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/mprun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpiexec...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpirun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mprun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpiexec...not 
found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mpirun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mprun...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpiexec...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpirun...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mprun...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpiexec...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mpirun...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/mprun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpiexec...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpirun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mprun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpiexec...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mpirun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/mprun...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpiexec...found Defined make macro "MPIEXEC" to "mpiexec" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #ifdef __cplusplus extern "C" #endif int init(int argc, char *argv[]) { int isInitialized; MPI_Init(&argc, &argv); MPI_Initialized(&isInitialized); return (int) isInitialized; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/libconftest.so -shared -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings 
-Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #ifdef __cplusplus extern "C" #endif int checkInit(void) { int isInitialized; MPI_Initialized(&isInitialized); if (isInitialized) MPI_Finalize(); return (int) isInitialized; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.MPI/libconftest.so -shared -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.MPI/conftest.o -lstdc++ -ldl Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #ifdef PETSC_HAVE_DLFCN_H #include #endif int main() { int argc = 1; char *argv[2] = {(char *) "conftest", NULL}; void *lib; int (*init)(int, char **); int (*checkInit)(void); lib = dlopen("/tmp/petsc-wjcu960y/config.libraries/lib1.so", RTLD_LAZY); if (!lib) { fprintf(stderr, "Could not open lib1.so: %s\n", dlerror()); exit(1); } init = (int (*)(int, char **)) dlsym(lib, "init"); if (!init) { fprintf(stderr, "Could not find initialization function\n"); exit(1); } if (!(*init)(argc, argv)) { fprintf(stderr, "Could not initialize library\n"); exit(1); } lib = dlopen("/tmp/petsc-wjcu960y/config.libraries/lib2.so", RTLD_LAZY); if (!lib) { fprintf(stderr, "Could not open lib2.so: %s\n", dlerror()); exit(1); } checkInit = (int (*)(void)) dlsym(lib, "checkInit"); if (!checkInit) { fprintf(stderr, "Could not find initialization check function\n"); exit(1); } if (!(*checkInit)()) { fprintf(stderr, "Did not link with shared library\n"); exit(2); } ; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl -ldl Testing executable /tmp/petsc-wjcu960y/config.libraries/conftest to see if it can be run Executing: mpiexec /tmp/petsc-wjcu960y/config.libraries/conftest Executing: mpiexec /tmp/petsc-wjcu960y/config.libraries/conftest ERROR while running executable: Could not execute "['mpiexec /tmp/petsc-wjcu960y/config.libraries/conftest']": Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could 
not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Could not find initialization function Library was not shared ================================================================================ TEST alternateConfigureLibrary from config.packages.yaml(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.yaml(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST configureLibrary from config.packages.valgrind(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) TESTING: configureLibrary from config.packages.valgrind(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) Find an installation and check if it can work with PETSc ================================================================================== Checking for a functional valgrind Not checking for library in Compiler specific search VALGRIND: [] because no functions given to check for ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names No functions to check for in library [] [] No functions to check for in library [] [] Checking for headers Compiler specific search VALGRIND: ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['valgrind/valgrind.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers 
-I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.headers -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~compilation terminated.: Not checking for library in Package specific search directory VALGRIND: [] because no functions given to check for ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names No functions to check for in library [] [] No functions to check for in library [] [] Checking for headers Package specific search directory VALGRIND: ['/usr/local/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['valgrind/valgrind.h'] in ['/usr/local/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/usr/local/include', 
'/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.headers -I/usr/local/include -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~compilation terminated.: Not checking for library in Package specific search directory VALGRIND: [] because no functions given to check for ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names No functions to check for in library [] [] No functions to check for in library [] [] Checking for headers Package specific search directory VALGRIND: ['/usr/local/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['valgrind/valgrind.h'] in ['/usr/local/include', 
'/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/usr/local/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.headers -I/usr/local/include -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include /tmp/petsc-wjcu960y/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conftest.c" # 1 "/tmp/petsc-wjcu960y/config.headers/confdefs.h" 1 # 2 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 # 1 "/tmp/petsc-wjcu960y/config.headers/conffix.h" 1 # 3 "/tmp/petsc-wjcu960y/config.headers/conftest.c" 2 stderr: /tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/tmp/petsc-wjcu960y/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~~~~compilation terminated.: VALGRIND: SearchDir DirPath not found.. skipping: /opt/local Executing: uname -s stdout: Linux =============================================================================== It appears you do not have valgrind installed on your system. We HIGHLY recommend you install it from www.valgrind.org Or install valgrind-devel or equivalent using your package manager. 
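(editor's aside, not part of the original configure output: the header probe that failed above merely preprocesses a file whose only non-boilerplate line is #include <valgrind/valgrind.h>. Once valgrind or valgrind-devel is installed, the same check can be reproduced by hand; the file name and the bare mpicc -E invocation below are illustrative sketches, not taken from the log:

/* probe.c - hand-run version of configure's valgrind header check */
#include <valgrind/valgrind.h>
int main(void) { return 0; }

mpicc -E probe.c > /dev/null && echo "valgrind header visible"

If that succeeds, rerunning configure should pick the header up, as the message goes on to say.)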
Then rerun ./configure =============================================================================== ================================================================================ TEST alternateConfigureLibrary from config.packages.ssl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.ssl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.sprng(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.sprng(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default Not checking sowing on user request of --with-sowing=0 ================================================================================ TEST checkDependencies from config.packages.slepc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.slepc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.slepc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) TESTING: configureLibrary from config.packages.slepc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) Find an installation and check if it can work with PETSc ================================================================================== Checking for a functional slepc Looking for SLEPC at git.slepc, hg.slepc or a directory starting with ['slepc'] Could not locate an existing copy of SLEPC: [] Downloading slepc =============================================================================== Trying to download git:///home/wangzl/packages/slepc for SLEPC =============================================================================== Executing: git clone /home/wangzl/packages/slepc /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc Looking for SLEPC at git.slepc, hg.slepc or a directory starting with ['slepc'] Found a copy of SLEPC in git.slepc Executing: ['git', 'rev-parse', '--git-dir'] stdout: .git Executing: ['git', 'cat-file', '-e', 'v3.11^{commit}'] Executing: ['git', 'rev-parse', 'v3.11'] stdout: 02e3715d9a6a29d28a09436b62760459488011a8 Executing: ['git', '-c', 'user.name=petsc-configure', '-c', 'user.email=petsc at configure', 'stash'] stdout: No local changes to save Executing: ['git', 'clean', '-f', '-d', '-x'] Executing: ['git', 'checkout', '-f', '02e3715d9a6a29d28a09436b62760459488011a8'] Defined "HAVE_SLEPC" to "1" Defined make macro "SLEPC" to "yes" Defined make rule "slepcbuild" with dependencies "" and code ['@echo "*** Building slepc ***"', '@${RM} -f ${PETSC_ARCH}/lib/petsc/conf/slepc.errorflg', '@(cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc && \\\n SLEPC_DIR=/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc PETSC_DIR=/home/wangzl/moose-compilers/petsc-3.11.4 PETSC_ARCH="" ./configure --prefix=/home/wangzl/moose-compilers/petsc-3.11.4 && \\\n SLEPC_DIR=/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc 
PETSC_DIR=/home/wangzl/moose-compilers/petsc-3.11.4 PETSC_ARCH=installed-arch-linux-c-opt ${OMAKE} SLEPC_DIR=/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc PETSC_DIR=/home/wangzl/moose-compilers/petsc-3.11.4 PETSC_ARCH=installed-arch-linux-c-opt ) > ${PETSC_ARCH}/lib/petsc/conf/slepc.log 2>&1 || \\\n (echo "**************************ERROR*************************************" && \\\n echo "Error building slepc. Check ${PETSC_ARCH}/lib/petsc/conf/slepc.log" && \\\n echo "********************************************************************" && \\\n touch ${PETSC_ARCH}/lib/petsc/conf/slepc.errorflg && \\\n exit 1)'] Defined make rule "slepcinstall" with dependencies "" and code ['@echo "*** Installing slepc ***"', '@(cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc && \\\n SLEPC_DIR=/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc PETSC_DIR=/home/wangzl/moose-compilers/petsc-3.11.4 PETSC_ARCH=installed-arch-linux-c-opt ${OMAKE} install SLEPC_DIR=/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/git.slepc PETSC_DIR=/home/wangzl/moose-compilers/petsc-3.11.4 PETSC_ARCH=installed-arch-linux-c-opt ) >> ${PETSC_ARCH}/lib/petsc/conf/slepc.log 2>&1 || \\\n (echo "**************************ERROR*************************************" && \\\n echo "Error building slepc. Check ${PETSC_ARCH}/lib/petsc/conf/slepc.log" && \\\n echo "********************************************************************" && \\\n exit 1)'] Defined make rule "slepc-build" with dependencies "" and code [] Defined make rule "slepc-install" with dependencies "slepcbuild slepcinstall" and code [] =============================================================================== Slepc examples are available at ${PETSC_DIR}/linux-opt/externalpackages/git.slepc export SLEPC_DIR=/home/wangzl/moose-compilers/petsc-3.11.4 =============================================================================== ================================================================================ TEST checkSharedLibrary from config.packages.slepc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.slepc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST alternateConfigureLibrary from config.packages.revolve(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.revolve(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.radau5(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.radau5(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.pami(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from 
config.packages.pami(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.opengles(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.opengles(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.opencl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.opencl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.muparser(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.muparser(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default Defined "PYTHON_EXE" to ""/home/wangzl/miniconda3/bin/python"" Executing: /home/wangzl/miniconda3/bin/python -c "import Cython" Executing: /home/wangzl/miniconda3/bin/python -c "import numpy" ================================================================================ TEST alternateConfigureLibrary from config.packages.petsc4py(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/petsc4py.py:96) TESTING: alternateConfigureLibrary from config.packages.petsc4py(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/petsc4py.py:96) Defined make rule "petsc4py-build" with dependencies "" and code [] Defined make rule "petsc4py-install" with dependencies "" and code [] ================================================================================ TEST alternateConfigureLibrary from config.packages.mpi4py(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/mpi4py.py:73) TESTING: alternateConfigureLibrary from config.packages.mpi4py(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/mpi4py.py:73) Defined make rule "mpi4py-build" with dependencies "" and code [] Defined make rule "mpi4py-install" with dependencies "" and code [] ================================================================================ TEST alternateConfigureLibrary from config.packages.mpe(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.mpe(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.memkind(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.memkind(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST 
alternateConfigureLibrary from config.packages.libmesh(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/libmesh.py:76) TESTING: alternateConfigureLibrary from config.packages.libmesh(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/libmesh.py:76) Defined make rule "libmesh-build" with dependencies "" and code [] Defined make rule "libmesh-install" with dependencies "" and code [] ================================================================================ TEST alternateConfigureLibrary from config.packages.moose(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.moose(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.libjpeg(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.libjpeg(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.libceed(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.libceed(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default Not a clone of PETSc, don't need Lgrind ================================================================================ TEST alternateConfigureLibrary from config.packages.gmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.gmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.mpfr(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.mpfr(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.giflib(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.giflib(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.cuda(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.cuda(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from 
config.packages.ctetgen(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.ctetgen(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.concurrencykit(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.concurrencykit(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST locateC2html from config.packages.c2html(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/c2html.py:32) TESTING: locateC2html from config.packages.c2html(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/c2html.py:32) Looking for default C2html executable Checking for program /home/wangzl/miniconda3/bin/c2html...not found Checking for program /home/wangzl/projects/moose/python/peacock/c2html...not found Checking for program /home/wangzl/software/fftwmpi/bin/c2html...not found Checking for program /home/wangzl/software/byacc/bin/c2html...not found Checking for program /home/wangzl/software/m4/bin/c2html...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/c2html...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/c2html...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/c2html...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/c2html...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/c2html...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/c2html...not found Checking for program /home/wangzl/miniconda3/bin/c2html...not found Checking for program /home/wangzl/projects/moose/python/peacock/c2html...not found Checking for program /home/wangzl/software/fftwmpi/bin/c2html...not found Checking for program /home/wangzl/software/byacc/bin/c2html...not found Checking for program /home/wangzl/software/m4/bin/c2html...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/c2html...not found Checking for program /home/wangzl/bin/c2html...not found Checking for program /usr/local/bin/c2html...not found Checking for program /usr/bin/c2html...not found Checking for program /bin/c2html...not found Checking for program /opt/pbs/bin/c2html...not found Checking for program /home/apps/c2html...not found Checking for program /tmp/stack_temp.rFVgkc/petsc-3.11.4/lib/petsc/bin/win32fe/c2html...not found ================================================================================ TEST alternateConfigureLibrary from config.packages.boost(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.boost(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.silo(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: 
alternateConfigureLibrary from config.packages.silo(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.Random123(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.Random123(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.PARTY(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.PARTY(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.Matlab(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.Matlab(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.MatlabEngine(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.MatlabEngine(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.Mathematica(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.Mathematica(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.hwloc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.hwloc(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST checkDependencies from config.packages.pthread(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.pthread(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.pthread(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/pthread.py:19) TESTING: configureLibrary from config.packages.pthread(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/pthread.py:19) Checks for pthread_barrier_t, cpu_set_t, and sys/sysctl.h ================================================================================== Checking 
for a functional pthread Checking for library in Compiler specific search PTHREAD: [] ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [pthread_create] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
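(editor's note, not part of the original configure output: this is the same link probe as before, but here it succeeds; no linker error is reported below even though no explicit -lpthread appears on the link line, so pthread_create is already satisfied by the default mpicc link libraries. The compile-only tests that follow then record HAVE_PTHREAD_BARRIER_T, HAVE_SCHED_CPU_SET_T and HAVE_SYS_SYSCTL_H.)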
*/ char pthread_create(); static void _check_pthread_create() { pthread_create(); } int main() { _check_pthread_create();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl No functions to check for in library [] [] Checking for headers Compiler specific search PTHREAD: ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['pthread.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.headers -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Found header files ['pthread.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] All intermediate test results are stored in /tmp/petsc-wjcu960y/config.packages.pthread Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.pthread/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.pthread/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.pthread/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.packages.pthread/conftest.c:6:20: warning: unused variable ???a??? 
[-Wunused-variable] pthread_barrier_t *a; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { pthread_barrier_t *a; ; return 0; } Defined "HAVE_PTHREAD_BARRIER_T" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.pthread/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.pthread -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /tmp/petsc-wjcu960y/config.packages.pthread/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.packages.pthread/conftest.c: In function ???main???: /tmp/petsc-wjcu960y/config.packages.pthread/conftest.c:6:12: warning: unused variable ???a??? [-Wunused-variable] cpu_set_t *a; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { cpu_set_t *a; ; return 0; } Defined "HAVE_SCHED_CPU_SET_T" to "1" Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.headers /tmp/petsc-wjcu960y/config.packages.pthread/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_SYSCTL_H" to "1" ================================================================================ TEST checkSharedLibrary from config.packages.pthread(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.pthread(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST checkDependencies from config.packages.openmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.openmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.openmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/openmp.py:19) TESTING: configureLibrary from config.packages.openmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/openmp.py:19) Checks for -fopenmp compiler flag Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types 
-I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -fopenmp Executing: mpif90 -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -fopenmp /tmp/petsc-wjcu960y/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -fopenmp Executing: mpicxx -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/config.libraries -I/tmp/petsc-wjcu960y/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp -fPIC /tmp/petsc-wjcu960y/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -fopenmp ================================================================================== Checking for a functional openmp Not checking for library in Compiler specific search OPENMP: [] because no functions given to check for ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - 
libDir may be a list of directories - libName may be a list of library names No functions to check for in library [] [] No functions to check for in library [] [] Checking for headers Compiler specific search OPENMP: ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['omp.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.headers -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Found header files ['omp.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkSharedLibrary from config.packages.openmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.openmp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST alternateConfigureLibrary from config.packages.viennacl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.viennacl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.opengl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.opengl(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.glut(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.glut(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by 
default ================================================================================ TEST checkDependencies from config.packages.X(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.X(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.X(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) TESTING: configureLibrary from config.packages.X(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) Find an installation and check if it can work with PETSc ================================================================================== Checking for a functional X Checking for library in Compiler specific search X: [] ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [XSetWMName] in library [] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char XSetWMName(); static void _check_XSetWMName() { XSetWMName(); } int main() { _check_XSetWMName();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_XSetWMName': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `XSetWMName' collect2: error: ld returned 1 exit status Checking for library in Compiler specific search X: ['libX11.a'] ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [XSetWMName] in library ['libX11.a'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char XSetWMName(); static void _check_XSetWMName() { XSetWMName(); } int main() { _check_XSetWMName();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -lX11 -lstdc++ -ldl Defined "HAVE_LIBX11" to "1" No functions to check for in library ['libX11.a'] [] Checking for headers Compiler specific search X: ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['X11/Xlib.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.headers -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Found header files ['X11/Xlib.h'] in ['/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkSharedLibrary from config.packages.X(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.X(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST alternateConfigureLibrary from config.packages.GLVis(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.GLVis(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.ColPack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.ColPack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from 
config.packages.CoDiPack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.CoDiPack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.adblaslapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.adblaslapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.ADOLC(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.ADOLC(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.szlib(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.szlib(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.zlib(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.zlib(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST locateCMake from config.packages.cmake(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/cmake.py:36) TESTING: locateCMake from config.packages.cmake(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/cmake.py:36) Looking for default CMake executable Checking for program /home/wangzl/miniconda3/bin/cmake...not found Checking for program /home/wangzl/projects/moose/python/peacock/cmake...not found Checking for program /home/wangzl/software/fftwmpi/bin/cmake...not found Checking for program /home/wangzl/software/byacc/bin/cmake...not found Checking for program /home/wangzl/software/m4/bin/cmake...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/cmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/cmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/cmake...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/cmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/cmake...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/cmake...not found Checking for program /home/wangzl/miniconda3/bin/cmake...not found Checking for program /home/wangzl/projects/moose/python/peacock/cmake...not found Checking for program /home/wangzl/software/fftwmpi/bin/cmake...not found Checking for program 
/home/wangzl/software/byacc/bin/cmake...not found Checking for program /home/wangzl/software/m4/bin/cmake...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/cmake...not found Checking for program /home/wangzl/bin/cmake...not found Checking for program /usr/local/bin/cmake...not found Checking for program /usr/bin/cmake...found Defined make macro "CMAKE" to "/usr/bin/cmake" Looking for default CTest executable Checking for program /home/wangzl/miniconda3/bin/ctest...not found Checking for program /home/wangzl/projects/moose/python/peacock/ctest...not found Checking for program /home/wangzl/software/fftwmpi/bin/ctest...not found Checking for program /home/wangzl/software/byacc/bin/ctest...not found Checking for program /home/wangzl/software/m4/bin/ctest...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ctest...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/ctest...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/ctest...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/ctest...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/ctest...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/ctest...not found Checking for program /home/wangzl/miniconda3/bin/ctest...not found Checking for program /home/wangzl/projects/moose/python/peacock/ctest...not found Checking for program /home/wangzl/software/fftwmpi/bin/ctest...not found Checking for program /home/wangzl/software/byacc/bin/ctest...not found Checking for program /home/wangzl/software/m4/bin/ctest...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/ctest...not found Checking for program /home/wangzl/bin/ctest...not found Checking for program /usr/local/bin/ctest...not found Checking for program /usr/bin/ctest...found Defined make macro "CTEST" to "/usr/bin/ctest" ================================================================================ TEST alternateConfigureLibrary from config.packages.googletest(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.googletest(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.unittestcpp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.unittestcpp(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.eigen(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.eigen(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.tetgen(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from 
config.packages.tetgen(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.tchem(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.tchem(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.saws(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.saws(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.libpng(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.libpng(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.combblas(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.combblas(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.Triangle(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.Triangle(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST checkDependencies from config.packages.PTScotch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.PTScotch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.PTScotch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) TESTING: configureLibrary from config.packages.PTScotch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) Find an installation and check if it can work with PETSc ================================================================================== Checking for a functional PTScotch Looking for PTSCOTCH at git.ptscotch, hg.ptscotch or a directory starting with ['scotch', 'petsc-pkg-scotch'] Could not locate an existing copy of PTSCOTCH: ['git.slepc'] Downloading PTScotch =============================================================================== Trying to download file:///home/wangzl/packages/petsc-pkg-scotch-c15036faac5f.tar.gz for PTSCOTCH =============================================================================== 
Downloading file:///home/wangzl/packages/petsc-pkg-scotch-c15036faac5f.tar.gz to /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/_d_petsc-pkg-scotch-c15036faac5f.tar.gz Extracting /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/_d_petsc-pkg-scotch-c15036faac5f.tar.gz Executing: cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages; chmod -R a+r petsc-pkg-scotch-c15036faac5f;find petsc-pkg-scotch-c15036faac5f -type d -name "*" -exec chmod a+rx {} \; Looking for PTSCOTCH at git.ptscotch, hg.ptscotch or a directory starting with ['scotch', 'petsc-pkg-scotch'] Found a copy of PTSCOTCH in petsc-pkg-scotch-c15036faac5f Creating PTScotch /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/Makefile.inc Checking for program /home/wangzl/miniconda3/bin/bison...not found Checking for program /home/wangzl/projects/moose/python/peacock/bison...not found Checking for program /home/wangzl/software/fftwmpi/bin/bison...not found Checking for program /home/wangzl/software/byacc/bin/bison...not found Checking for program /home/wangzl/software/m4/bin/bison...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/bison...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/bison...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/bison...not found Checking for program /opt/intel/debugger_2019/gdb/intel64/bin/bison...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/bin/bison...not found Checking for program /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/bison...not found Checking for program /home/wangzl/miniconda3/bin/bison...not found Checking for program /home/wangzl/projects/moose/python/peacock/bison...not found Checking for program /home/wangzl/software/fftwmpi/bin/bison...not found Checking for program /home/wangzl/software/byacc/bin/bison...not found Checking for program /home/wangzl/software/m4/bin/bison...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/bison...not found Checking for program /home/wangzl/bin/bison...not found Checking for program /usr/local/bin/bison...not found Checking for program /usr/bin/bison...found Defined make macro "BISON" to "/usr/bin/bison" Checking for program /home/wangzl/miniconda3/bin/flex...not found Checking for program /home/wangzl/projects/moose/python/peacock/flex...not found Checking for program /home/wangzl/software/fftwmpi/bin/flex...not found Checking for program /home/wangzl/software/byacc/bin/flex...not found Checking for program /home/wangzl/software/m4/bin/flex...not found Checking for program /home/wangzl/software/flex-2.6.4/bin/flex...found Defined make macro "FLEX" to "/home/wangzl/software/flex-2.6.4/bin/flex" Checking for functions [gzwrite] in library ['-lz'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread 
-I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char gzwrite(); static void _check_gzwrite() { gzwrite(); } int main() { _check_gzwrite();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -lz -lstdc++ -ldl Defined "HAVE_LIBZ" to "1" Adding ['-lz'] to LIBS Checking for functions [pthread_barrier_destroy] in library ['-lpthread'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char pthread_barrier_destroy(); static void _check_pthread_barrier_destroy() { pthread_barrier_destroy(); } int main() { _check_pthread_barrier_destroy();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBPTHREAD" to "1" Adding ['-lpthread'] to LIBS Checking for functions [sin] in library ['-lm'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Possible ERROR while running compiler: stderr: /tmp/petsc-wjcu960y/config.libraries/conftest.c:4:6: warning: conflicting types for built-in function ‘sin’ 
[-Wbuiltin-declaration-mismatch] char sin(); ^~~ Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char sin(); static void _check_sin() { sin(); } int main() { _check_sin();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBM" to "1" Adding ['-lm'] to LIBS Checking for functions [timer_create] in library ['-lrt'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char timer_create(); static void _check_timer_create() { timer_create(); } int main() { _check_timer_create();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBRT" to "1" Adding ['-lrt'] to LIBS Executing: uname -s stdout: Linux Have to rebuild PTSCOTCH, /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/Makefile.inc != /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/lib/petsc/conf/pkg.conf.ptscotch =============================================================================== Compiling PTScotch; this may take several minutes =============================================================================== Executing: cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src && make clean ptesmumps esmumps stdout: /usr/bin/mkdir -p ../bin /usr/bin/mkdir -p ../include /usr/bin/mkdir -p ../lib (cd libscotch ; make clean) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' rm -f *~ *.o lib*.a parser_yy.c parser_ly.h parser_ll.c *scotch.h *scotchf.h y.output *dummysizes make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' (cd scotch ; make clean) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' rm -f *~ *.o acpl amk_ccc amk_fft2 amk_grf amk_hy amk_m2 amk_p2 atst gbase gcv *ggath *gmap gmk_hy gmk_m2 gmk_m3 gmk_msh gmk_ub2 gmtst *gord gotst gout *gpart *gscat *gtst mcv mmk_m2 mmk_m3 mord mtst make[1]: Leaving directory 
'/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' (cd libscotchmetis ; make clean) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' rm -f *~ *.o lib*.a metis.h parmetis.h make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' (cd check ; make clean) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/check' rm -f *~ *.o make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/check' (cd esmumps ; make clean) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' rm -f *~ common.h *.o lib*.a main_esmumps make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' (cd libscotch ; make VERSION=6 RELEASE=0 PATCHLEVEL=6 scotch && make install) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make CC="mpicc" CCD="mpicc" \ scotch.h \ scotchf.h \ libscotch.a \ libscotcherr.a \ libscotcherrexit.a make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch.c -o arch.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 dummysizes.c -o dummysizes -lz -lpthread -lm -lrt ./dummysizes "-s" library.h scotch.h ./dummysizes "-s" library_f.h scotchf.h mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_build.c -o arch_build.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_build2.c -o arch_build2.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_cmplt.c -o arch_cmplt.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_cmpltw.c -o arch_cmpltw.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_deco.c -o arch_deco.o 
-DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_deco2.c -o arch_deco2.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_dist.c -o arch_dist.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_hcub.c -o arch_hcub.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_mesh.c -o arch_mesh.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_sub.c -o arch_sub.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_tleaf.c -o arch_tleaf.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_torus.c -o arch_torus.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_vcmplt.c -o arch_vcmplt.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c arch_vhcub.c -o arch_vhcub.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph.c -o bgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_bd.c -o bgraph_bipart_bd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_df.c -o bgraph_bipart_df.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ 
-DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_ex.c -o bgraph_bipart_ex.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_fm.c -o bgraph_bipart_fm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_gg.c -o bgraph_bipart_gg.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_gp.c -o bgraph_bipart_gp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_ml.c -o bgraph_bipart_ml.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_st.c -o bgraph_bipart_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_bipart_zr.c -o bgraph_bipart_zr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_check.c -o bgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c bgraph_store.c -o bgraph_store.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common.c -DSCOTCH_COMMON_RENAME -o common.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_file.c -DSCOTCH_COMMON_RENAME -o common_file.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_file_compress.c -DSCOTCH_COMMON_RENAME -o common_file_compress.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_file_decompress.c -DSCOTCH_COMMON_RENAME -o common_file_decompress.o mpicc -fPIC 
-fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_integer.c -DSCOTCH_COMMON_RENAME -o common_integer.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_memory.c -DSCOTCH_COMMON_RENAME -o common_memory.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_string.c -DSCOTCH_COMMON_RENAME -o common_string.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_stub.c -DSCOTCH_COMMON_RENAME -o common_stub.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c common_thread.c -DSCOTCH_COMMON_RENAME -o common_thread.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c fibo.c -o fibo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c gain.c -o gain.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c geom.c -o geom.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph.c -o graph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_base.c -o graph_base.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_band.c -o graph_band.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_check.c -o graph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_clone.c -o graph_clone.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" 
-DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_coarsen.c -o graph_coarsen.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_diam.c -o graph_diam.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_ielo.c -o graph_ielo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_induce.c -o graph_induce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_io.c -o graph_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_io_chac.c -o graph_io_chac.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_io_habo.c -o graph_io_habo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_io_mmkt.c -o graph_io_mmkt.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_io_scot.c -o graph_io_scot.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_list.c -o graph_list.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c graph_match.c -o graph_match.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hall_order_hd.c -o hall_order_hd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hall_order_hf.c -o hall_order_hf.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 
-DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hall_order_hx.c -o hall_order_hx.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph.c -o hgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_check.c -o hgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_induce.c -o hgraph_induce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_bl.c -o hgraph_order_bl.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_cc.c -o hgraph_order_cc.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_cp.c -o hgraph_order_cp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_gp.c -o hgraph_order_gp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_hd.c -o hgraph_order_hd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_hf.c -o hgraph_order_hf.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_hx.c -o hgraph_order_hx.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_kp.c -o hgraph_order_kp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp 
-DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_nd.c -o hgraph_order_nd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_si.c -o hgraph_order_si.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hgraph_order_st.c -o hgraph_order_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh.c -o hmesh.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_check.c -o hmesh_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_hgraph.c -o hmesh_hgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_induce.c -o hmesh_induce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_mesh.c -o hmesh_mesh.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_bl.c -o hmesh_order_bl.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_cp.c -o hmesh_order_cp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_gr.c -o hmesh_order_gr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_gp.c -o hmesh_order_gp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME 
-Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_hd.c -o hmesh_order_hd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_hf.c -o hmesh_order_hf.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_hx.c -o hmesh_order_hx.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_nd.c -o hmesh_order_nd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_si.c -o hmesh_order_si.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c hmesh_order_st.c -o hmesh_order_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph.c -o kgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_band.c -o kgraph_band.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_check.c -o kgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_bd.c -o kgraph_map_bd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_cp.c -o kgraph_map_cp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_df.c -o kgraph_map_df.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_ex.c -o kgraph_map_ex.o 
-DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_fm.c -o kgraph_map_fm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_ml.c -o kgraph_map_ml.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_rb.c -o kgraph_map_rb.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_rb_map.c -o kgraph_map_rb_map.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_rb_part.c -o kgraph_map_rb_part.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_map_st.c -o kgraph_map_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c kgraph_store.c -o kgraph_store.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_arch.c -o library_arch.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_arch_f.c -o library_arch_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_arch_build.c -o library_arch_build.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_arch_build_f.c -o library_arch_build_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_common_f.c -o library_common_f.o -DSCOTCH_VERSION_NUM=6 
-DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_geom.c -o library_geom.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_geom_f.c -o library_geom_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph.c -o library_graph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_f.c -o library_graph_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_base.c -o library_graph_base.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_base_f.c -o library_graph_base_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_check.c -o library_graph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_check_f.c -o library_graph_check_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_coarsen.c -o library_graph_coarsen.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_coarsen_f.c -o library_graph_coarsen_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_color.c -o library_graph_color.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_color_f.c -o 
library_graph_color_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_diam.c -o library_graph_diam.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_diam_f.c -o library_graph_diam_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_induce.c -o library_graph_induce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_induce_f.c -o library_graph_induce_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_chac.c -o library_graph_io_chac.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_chac_f.c -o library_graph_io_chac_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_habo.c -o library_graph_io_habo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_habo_f.c -o library_graph_io_habo_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_mmkt.c -o library_graph_io_mmkt.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_mmkt_f.c -o library_graph_io_mmkt_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_scot.c -o library_graph_io_scot.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD 
-DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_io_scot_f.c -o library_graph_io_scot_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_map.c -o library_graph_map.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_map_f.c -o library_graph_map_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_map_io.c -o library_graph_map_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_map_io_f.c -o library_graph_map_io_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_map_view.c -o library_graph_map_view.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_map_view_f.c -o library_graph_map_view_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_order.c -o library_graph_order.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_order_f.c -o library_graph_order_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_part_ovl.c -o library_graph_part_ovl.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_graph_part_ovl_f.c -o library_graph_part_ovl_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mapping.c -o library_mapping.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 
-DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_memory.c -o library_memory.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_memory_f.c -o library_memory_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh.c -o library_mesh.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_f.c -o library_mesh_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_graph.c -o library_mesh_graph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_graph_f.c -o library_mesh_graph_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_io_habo.c -o library_mesh_io_habo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_io_habo_f.c -o library_mesh_io_habo_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_io_scot.c -o library_mesh_io_scot.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_io_scot_f.c -o library_mesh_io_scot_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_order.c -o library_mesh_order.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_mesh_order_f.c -o library_mesh_order_f.o 
-DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_order.c -o library_order.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_parser.c -o library_parser.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_parser_f.c -o library_parser_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_random.c -o library_random.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_random_f.c -o library_random_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_strat.c -o library_strat.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_version.c -o library_version.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_version_f.c -o library_version_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mapping.c -o mapping.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mapping_io.c -o mapping_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh.c -o mesh.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh_check.c -o mesh_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC 
-fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh_coarsen.c -o mesh_coarsen.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh_graph.c -o mesh_graph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh_induce_sepa.c -o mesh_induce_sepa.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh_io.c -o mesh_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh_io_habo.c -o mesh_io_habo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c mesh_io_scot.c -o mesh_io_scot.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c order.c -o order.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c order_check.c -o order_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c order_io.c -o order_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c parser.c -o parser.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 (/usr/bin/bison -y -d -v parser_yy.y && \ /usr/bin/mv y.tab.c parser_yy.c && \ /usr/bin/mv y.tab.h parser_ly.h) || \ /usr/bin/cp last_resort/parser_ly.h last_resort/parser_yy.c . mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c parser_yy.c -o parser_yy.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 (/home/wangzl/software/flex-2.6.4/bin/flex parser_ll.l && \ /usr/bin/mv lex.yy.c parser_ll.c) || \ /usr/bin/cp last_resort/parser_ll.c . 
mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c parser_ll.c -o parser_ll.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph.c -o vgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_check.c -o vgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_bd.c -o vgraph_separate_bd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_df.c -o vgraph_separate_df.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_es.c -o vgraph_separate_es.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_fm.c -o vgraph_separate_fm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_gg.c -o vgraph_separate_gg.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_gp.c -o vgraph_separate_gp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_ml.c -o vgraph_separate_ml.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_st.c -o vgraph_separate_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_th.c -o vgraph_separate_th.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC 
-fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_vw.c -o vgraph_separate_vw.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_separate_zr.c -o vgraph_separate_zr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vgraph_store.c -o vgraph_store.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh.c -o vmesh.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_check.c -o vmesh_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_separate_fm.c -o vmesh_separate_fm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_separate_gg.c -o vmesh_separate_gg.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_separate_gr.c -o vmesh_separate_gr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_separate_ml.c -o vmesh_separate_ml.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_separate_zr.c -o vmesh_separate_zr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_separate_st.c -o vmesh_separate_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c vmesh_store.c -o vmesh_store.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp 
-DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph.c -o wgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_check.c -o wgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_part_fm.c -o wgraph_part_fm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_part_gg.c -o wgraph_part_gg.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_part_gp.c -o wgraph_part_gp.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_part_ml.c -o wgraph_part_ml.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_part_rb.c -o wgraph_part_rb.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_part_st.c -o wgraph_part_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_part_zr.c -o wgraph_part_zr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c wgraph_store.c -o wgraph_store.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 /usr/bin/ar cr libscotch.a arch.o arch_build.o arch_build2.o arch_cmplt.o arch_cmpltw.o arch_deco.o arch_deco2.o arch_dist.o arch_hcub.o arch_mesh.o arch_sub.o arch_tleaf.o arch_torus.o arch_vcmplt.o arch_vhcub.o bgraph.o bgraph_bipart_bd.o bgraph_bipart_df.o bgraph_bipart_ex.o bgraph_bipart_fm.o bgraph_bipart_gg.o bgraph_bipart_gp.o bgraph_bipart_ml.o bgraph_bipart_st.o bgraph_bipart_zr.o bgraph_check.o bgraph_store.o common.o common_file.o common_file_compress.o common_file_decompress.o common_integer.o common_memory.o common_string.o common_stub.o common_thread.o fibo.o gain.o geom.o graph.o graph_base.o graph_band.o graph_check.o graph_clone.o graph_coarsen.o graph_diam.o graph_ielo.o graph_induce.o 
graph_io.o graph_io_chac.o graph_io_habo.o graph_io_mmkt.o graph_io_scot.o graph_list.o graph_match.o hall_order_hd.o hall_order_hf.o hall_order_hx.o hgraph.o hgraph_check.o hgraph_induce.o hgraph_order_bl.o hgraph_order_cc.o hgraph_order_cp.o hgraph_order_gp.o hgraph_order_hd.o hgraph_order_hf.o hgraph_order_hx.o hgraph_order_kp.o hgraph_order_nd.o hgraph_order_si.o hgraph_order_st.o hmesh.o hmesh_check.o hmesh_hgraph.o hmesh_induce.o hmesh_mesh.o hmesh_order_bl.o hmesh_order_cp.o hmesh_order_gr.o hmesh_order_gp.o hmesh_order_hd.o hmesh_order_hf.o hmesh_order_hx.o hmesh_order_nd.o hmesh_order_si.o hmesh_order_st.o kgraph.o kgraph_band.o kgraph_check.o kgraph_map_bd.o kgraph_map_cp.o kgraph_map_df.o kgraph_map_ex.o kgraph_map_fm.o kgraph_map_ml.o kgraph_map_rb.o kgraph_map_rb_map.o kgraph_map_rb_part.o kgraph_map_st.o kgraph_store.o library_arch.o library_arch_f.o library_arch_build.o library_arch_build_f.o library_common_f.o library_geom.o library_geom_f.o library_graph.o library_graph_f.o library_graph_base.o library_graph_base_f.o library_graph_check.o library_graph_check_f.o library_graph_coarsen.o library_graph_coarsen_f.o library_graph_color.o library_graph_color_f.o library_graph_diam.o library_graph_diam_f.o library_graph_induce.o library_graph_induce_f.o library_graph_io_chac.o library_graph_io_chac_f.o library_graph_io_habo.o library_graph_io_habo_f.o library_graph_io_mmkt.o library_graph_io_mmkt_f.o library_graph_io_scot.o library_graph_io_scot_f.o library_graph_map.o library_graph_map_f.o library_graph_map_io.o library_graph_map_io_f.o library_graph_map_view.o library_graph_map_view_f.o library_graph_order.o library_graph_order_f.o library_graph_part_ovl.o library_graph_part_ovl_f.o library_mapping.o library_memory.o library_memory_f.o library_mesh.o library_mesh_f.o library_mesh_graph.o library_mesh_graph_f.o library_mesh_io_habo.o library_mesh_io_habo_f.o library_mesh_io_scot.o library_mesh_io_scot_f.o library_mesh_order.o library_mesh_order_f.o library_order.o library_parser.o library_parser_f.o library_random.o library_random_f.o library_strat.o library_version.o library_version_f.o mapping.o mapping_io.o mesh.o mesh_check.o mesh_coarsen.o mesh_graph.o mesh_induce_sepa.o mesh_io.o mesh_io_habo.o mesh_io_scot.o order.o order_check.o order_io.o parser.o parser_ll.o parser_yy.o vgraph.o vgraph_check.o vgraph_separate_bd.o vgraph_separate_df.o vgraph_separate_es.o vgraph_separate_fm.o vgraph_separate_gg.o vgraph_separate_gp.o vgraph_separate_ml.o vgraph_separate_st.o vgraph_separate_th.o vgraph_separate_vw.o vgraph_separate_zr.o vgraph_store.o vmesh.o vmesh_check.o vmesh_separate_fm.o vmesh_separate_gg.o vmesh_separate_gr.o vmesh_separate_ml.o vmesh_separate_zr.o vmesh_separate_st.o vmesh_store.o wgraph.o wgraph_check.o wgraph_part_fm.o wgraph_part_gg.o wgraph_part_gp.o wgraph_part_ml.o wgraph_part_rb.o wgraph_part_st.o wgraph_part_zr.o wgraph_store.o /usr/bin/ranlib libscotch.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_error.c -o library_error.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 /usr/bin/ar cr libscotcherr.a library_error.o /usr/bin/ranlib libscotcherr.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -c library_error_exit.c -o library_error_exit.o 
-DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 /usr/bin/ar cr libscotcherrexit.a library_error_exit.o /usr/bin/ranlib libscotcherrexit.a make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' /usr/bin/cp scotch.h ../../include /usr/bin/cp scotchf.h ../../include /usr/bin/cp libscotch.a libscotcherr*.a ../../lib make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' (cd scotch ; make VERSION=6 RELEASE=0 PATCHLEVEL=6 scotch && make install) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make CC="mpicc" SCOTCHLIB=scotch \ acpl \ amk_ccc \ amk_fft2 \ amk_grf \ amk_hy \ amk_m2 \ amk_p2 \ atst \ gbase \ gcv \ gmap \ gmk_hy \ gmk_m2 \ gmk_m3 \ gmk_msh \ gmk_ub2 \ gmtst \ gord \ gotst \ gout \ gpart \ gscat \ gtst \ mcv \ mmk_m2 \ mmk_m3 \ mord \ mtst make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch acpl.c -o acpl -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch amk_ccc.c -o amk_ccc -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch amk_fft2.c -o amk_fft2 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch amk_grf.c -o amk_grf -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch amk_hy.c -o amk_hy -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch amk_m2.c -o amk_m2 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch amk_p2.c -o amk_p2 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector 
-g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch atst.c -o atst -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gbase.c -o gbase -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gcv.c -o gcv -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmap.c -o gmap -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmk_hy.c -o gmk_hy -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmk_m2.c -o gmk_m2 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmk_m3.c -o gmk_m3 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmk_msh.c -o gmk_msh -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmk_ub2.c -o gmk_ub2 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmtst.c -o gmtst -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gord.c -o gord -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gotst.c -o gotst -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC 
-fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../libscotch -I../../include gout_c.c gout_o.c -o gout -L../../lib -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gmap.c -DSCOTCH_COMPILE_PART -o gpart -L../../lib -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gscat.c -o gscat -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch gtst.c -o gtst -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch mcv.c -o mcv -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch mmk_m2.c -o mmk_m2 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch mmk_m3.c -o mmk_m3 -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch mord.c -o mord -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch mtst.c -o mtst -L../../lib -lscotch -lscotch -lscotcherrexit -lz -lpthread -lm -lrt make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make CC="mpicc" SCOTCHLIB=scotch \ acpl \ amk_ccc \ amk_fft2 \ amk_grf \ amk_hy \ amk_m2 \ amk_p2 \ atst \ gbase \ gcv \ gmap \ gmk_hy \ gmk_m2 \ gmk_m3 \ gmk_msh \ gmk_ub2 \ gmtst \ gord \ gotst \ gout \ gpart \ gscat \ gtst \ mcv \ mmk_m2 \ mmk_m3 \ mord \ mtst make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make[2]: 'acpl' is up to date. make[2]: 'amk_ccc' is up to date. 
make[2]: 'amk_fft2' is up to date.
make[2]: 'amk_grf' is up to date.
make[2]: 'amk_hy' is up to date.
make[2]: 'amk_m2' is up to date.
make[2]: 'amk_p2' is up to date.
make[2]: 'atst' is up to date.
make[2]: 'gbase' is up to date.
make[2]: 'gcv' is up to date.
make[2]: 'gmap' is up to date.
make[2]: 'gmk_hy' is up to date.
make[2]: 'gmk_m2' is up to date.
make[2]: 'gmk_m3' is up to date.
make[2]: 'gmk_msh' is up to date.
make[2]: 'gmk_ub2' is up to date.
make[2]: 'gmtst' is up to date.
make[2]: 'gord' is up to date.
make[2]: 'gotst' is up to date.
make[2]: 'gout' is up to date.
make[2]: 'gpart' is up to date.
make[2]: 'gscat' is up to date.
make[2]: 'gtst' is up to date.
make[2]: 'mcv' is up to date.
make[2]: 'mmk_m2' is up to date.
make[2]: 'mmk_m3' is up to date.
make[2]: 'mord' is up to date.
make[2]: 'mtst' is up to date.
make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch'
/usr/bin/cp acpl amk_ccc amk_fft2 amk_grf amk_hy amk_m2 amk_p2 atst gbase gcv gmap gmk_hy gmk_m2 gmk_m3 gmk_msh gmk_ub2 gmtst gord gotst gout gpart gtst gscat mcv mmk_m2 mmk_m3 mord mtst ../../bin
make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch'
(cd libscotchmetis ; make scotch && make install)
make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis'
make CC="mpicc" SCOTCHLIB=ptscotch \
  metis.h \
  libscotchmetis.a
make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis'
../libscotch/dummysizes library_metis.h metis.h
mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch -c metis_graph_order.c -o metis_graph_order.o
mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch -c metis_graph_order_f.c -o metis_graph_order_f.o
mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch -c metis_graph_part.c -o metis_graph_part.o
mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch -c metis_graph_part_f.c -o metis_graph_part_f.o
/usr/bin/ar cr libscotchmetis.a metis_graph_order.o metis_graph_order_f.o metis_graph_part.o metis_graph_part_f.o
/usr/bin/ranlib libscotchmetis.a
make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis'
make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis'
make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis'
make CC="mpicc" SCOTCHLIB=ptscotch \
  metis.h \
  libscotchmetis.a
make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis'
make[2]: 'metis.h' is up to date. make[2]: 'libscotchmetis.a' is up to date. make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' /usr/bin/cp metis.h ../../include /usr/bin/cp libscotchmetis.a ../../lib make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' (cd libscotch ; make VERSION=6 RELEASE=0 PATCHLEVEL=6 ptscotch && make ptinstall) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make CC="mpicc" CCD="mpicc" \ scotch.h \ scotchf.h \ libscotch.a \ libscotcherr.a \ libscotcherrexit.a make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make[2]: 'scotch.h' is up to date. make[2]: 'scotchf.h' is up to date. make[2]: 'libscotch.a' is up to date. make[2]: 'libscotcherr.a' is up to date. make[2]: 'libscotcherrexit.a' is up to date. make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make CFLAGS="-fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH" CC="mpicc" \ ptscotch.h \ ptscotchf.h \ libptscotch.a \ libptscotcherr.a \ libptscotcherrexit.a make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 dummysizes.c -o ptdummysizes -lz -lpthread -lm -lrt ./ptdummysizes "-s" library_pt.h ptscotch.h ./ptdummysizes "-s" library_pt_f.h ptscotchf.h mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph.c -o bdgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_bipart_bd.c -o bdgraph_bipart_bd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_bipart_df.c -o bdgraph_bipart_df.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_bipart_ex.c -o bdgraph_bipart_ex.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_bipart_ml.c -o bdgraph_bipart_ml.o 
-DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_bipart_sq.c -o bdgraph_bipart_sq.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_bipart_st.c -o bdgraph_bipart_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_bipart_zr.c -o bdgraph_bipart_zr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_check.c -o bdgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_gather_all.c -o bdgraph_gather_all.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c bdgraph_store.c -o bdgraph_store.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c comm.c -o comm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph.c -o dgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_allreduce.c -o dgraph_allreduce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_band.c -o dgraph_band.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_build.c -o dgraph_build.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED 
-DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_build_grid3d.c -o dgraph_build_grid3d.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_build_hcub.c -o dgraph_build_hcub.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_check.c -o dgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_coarsen.c -o dgraph_coarsen.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_fold.c -o dgraph_fold.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_fold_comm.c -o dgraph_fold_comm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_fold_dup.c -o dgraph_fold_dup.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_gather.c -o dgraph_gather.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_gather_all.c -o dgraph_gather_all.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_ghst.c -o dgraph_ghst.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_halo.c -o dgraph_halo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_induce.c -o dgraph_induce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 
-DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_io_load.c -o dgraph_io_load.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_io_save.c -o dgraph_io_save.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_match.c -o dgraph_match.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_match_sync_coll.c -o dgraph_match_sync_coll.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_match_sync_ptop.c -o dgraph_match_sync_ptop.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_match_check.c -o dgraph_match_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_redist.c -o dgraph_redist.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_scatter.c -o dgraph_scatter.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dgraph_view.c -o dgraph_view.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dmapping.c -o dmapping.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dmapping_io.c -o dmapping_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME 
-Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dorder.c -o dorder.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dorder_gather.c -o dorder_gather.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dorder_io.c -o dorder_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dorder_io_block.c -o dorder_io_block.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dorder_io_tree.c -o dorder_io_tree.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dorder_perm.c -o dorder_perm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c dorder_tree_dist.c -o dorder_tree_dist.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph.c -o hdgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_check.c -o hdgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_fold.c -o hdgraph_fold.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_gather.c -o hdgraph_gather.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_induce.c -o hdgraph_induce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp 
-DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_order_nd.c -o hdgraph_order_nd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_order_si.c -o hdgraph_order_si.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_order_sq.c -o hdgraph_order_sq.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c hdgraph_order_st.c -o hdgraph_order_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c kdgraph.c -o kdgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c kdgraph_gather.c -o kdgraph_gather.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c kdgraph_map_rb.c -o kdgraph_map_rb.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c kdgraph_map_rb_map.c -o kdgraph_map_rb_map.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c kdgraph_map_rb_part.c -o kdgraph_map_rb_part.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c kdgraph_map_st.c -o kdgraph_map_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph.c -o library_dgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c 
library_dgraph_f.c -o library_dgraph_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_band.c -o library_dgraph_band.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_band_f.c -o library_dgraph_band_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_build.c -o library_dgraph_build.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_build_f.c -o library_dgraph_build_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_build_grid3d.c -o library_dgraph_build_grid3d.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_build_grid3d_f.c -o library_dgraph_build_grid3d_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_check.c -o library_dgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_check_f.c -o library_dgraph_check_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_coarsen.c -o library_dgraph_coarsen.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_coarsen_f.c -o library_dgraph_coarsen_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c 
library_dgraph_gather.c -o library_dgraph_gather.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_gather_f.c -o library_dgraph_gather_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_grow.c -o library_dgraph_grow.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_halo.c -o library_dgraph_halo.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_halo_f.c -o library_dgraph_halo_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_induce.c -o library_dgraph_induce.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_induce_f.c -o library_dgraph_induce_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_io_load.c -o library_dgraph_io_load.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_io_load_f.c -o library_dgraph_io_load_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_io_save.c -o library_dgraph_io_save.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_io_save_f.c -o library_dgraph_io_save_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c 
library_dgraph_map.c -o library_dgraph_map.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_map_f.c -o library_dgraph_map_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_map_view.c -o library_dgraph_map_view.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_map_view_f.c -o library_dgraph_map_view_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order.c -o library_dgraph_order.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_f.c -o library_dgraph_order_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_gather.c -o library_dgraph_order_gather.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_gather_f.c -o library_dgraph_order_gather_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_io.c -o library_dgraph_order_io.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_io_f.c -o library_dgraph_order_io_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_io_block.c -o library_dgraph_order_io_block.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 
-DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_io_block_f.c -o library_dgraph_order_io_block_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_perm.c -o library_dgraph_order_perm.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_perm_f.c -o library_dgraph_order_perm_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_tree_dist.c -o library_dgraph_order_tree_dist.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_order_tree_dist_f.c -o library_dgraph_order_tree_dist_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_redist.c -o library_dgraph_redist.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_redist_f.c -o library_dgraph_redist_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_scatter.c -o library_dgraph_scatter.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_scatter_f.c -o library_dgraph_scatter_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_stat.c -o library_dgraph_stat.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dgraph_stat_f.c -o library_dgraph_stat_f.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD 
-DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dmapping.c -o library_dmapping.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_dorder.c -o library_dorder.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph.c -o vdgraph.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_check.c -o vdgraph_check.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_gather_all.c -o vdgraph_gather_all.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_separate_bd.c -o vdgraph_separate_bd.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_separate_df.c -o vdgraph_separate_df.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_separate_ml.c -o vdgraph_separate_ml.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_separate_sq.c -o vdgraph_separate_sq.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_separate_st.c -o vdgraph_separate_st.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c vdgraph_separate_zr.c -o vdgraph_separate_zr.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c 
vdgraph_store.c -o vdgraph_store.o -DSCOTCH_VERSION_NUM=6 -DSCOTCH_RELEASE_NUM=0 -DSCOTCH_PATCHLEVEL_NUM=6 /usr/bin/ar cr libptscotch.a bdgraph.o bdgraph_bipart_bd.o bdgraph_bipart_df.o bdgraph_bipart_ex.o bdgraph_bipart_ml.o bdgraph_bipart_sq.o bdgraph_bipart_st.o bdgraph_bipart_zr.o bdgraph_check.o bdgraph_gather_all.o bdgraph_store.o comm.o dgraph.o dgraph_allreduce.o dgraph_band.o dgraph_build.o dgraph_build_grid3d.o dgraph_build_hcub.o dgraph_check.o dgraph_coarsen.o dgraph_fold.o dgraph_fold_comm.o dgraph_fold_dup.o dgraph_gather.o dgraph_gather_all.o dgraph_ghst.o dgraph_halo.o dgraph_induce.o dgraph_io_load.o dgraph_io_save.o dgraph_match.o dgraph_match_sync_coll.o dgraph_match_sync_ptop.o dgraph_match_check.o dgraph_redist.o dgraph_scatter.o dgraph_view.o dmapping.o dmapping_io.o dorder.o dorder_gather.o dorder_io.o dorder_io_block.o dorder_io_tree.o dorder_perm.o dorder_tree_dist.o hdgraph.o hdgraph_check.o hdgraph_fold.o hdgraph_gather.o hdgraph_induce.o hdgraph_order_nd.o hdgraph_order_si.o hdgraph_order_sq.o hdgraph_order_st.o kdgraph.o kdgraph_gather.o kdgraph_map_rb.o kdgraph_map_rb_map.o kdgraph_map_rb_part.o kdgraph_map_st.o library_dgraph.o library_dgraph_f.o library_dgraph_band.o library_dgraph_band_f.o library_dgraph_build.o library_dgraph_build_f.o library_dgraph_build_grid3d.o library_dgraph_build_grid3d_f.o library_dgraph_check.o library_dgraph_check_f.o library_dgraph_coarsen.o library_dgraph_coarsen_f.o library_dgraph_gather.o library_dgraph_gather_f.o library_dgraph_grow.o library_dgraph_halo.o library_dgraph_halo_f.o library_dgraph_induce.o library_dgraph_induce_f.o library_dgraph_io_load.o library_dgraph_io_load_f.o library_dgraph_io_save.o library_dgraph_io_save_f.o library_dgraph_map.o library_dgraph_map_f.o library_dgraph_map_view.o library_dgraph_map_view_f.o library_dgraph_order.o library_dgraph_order_f.o library_dgraph_order_gather.o library_dgraph_order_gather_f.o library_dgraph_order_io.o library_dgraph_order_io_f.o library_dgraph_order_io_block.o library_dgraph_order_io_block_f.o library_dgraph_order_perm.o library_dgraph_order_perm_f.o library_dgraph_order_tree_dist.o library_dgraph_order_tree_dist_f.o library_dgraph_redist.o library_dgraph_redist_f.o library_dgraph_scatter.o library_dgraph_scatter_f.o library_dgraph_stat.o library_dgraph_stat_f.o library_dmapping.o library_dorder.o vdgraph.o vdgraph_check.o vdgraph_gather_all.o vdgraph_separate_bd.o vdgraph_separate_df.o vdgraph_separate_ml.o vdgraph_separate_sq.o vdgraph_separate_st.o vdgraph_separate_zr.o vdgraph_store.o /usr/bin/ranlib libptscotch.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_error.c -o library_error_pt.o /usr/bin/ar cr libptscotcherr.a library_error_pt.o /usr/bin/ranlib libptscotcherr.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -c library_error_exit.c -o library_error_exit_pt.o /usr/bin/ar cr libptscotcherrexit.a library_error_exit_pt.o /usr/bin/ranlib libptscotcherrexit.a make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' make[1]: Entering directory 
'/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' /usr/bin/cp ptscotch.h ../../include /usr/bin/cp ptscotchf.h ../../include /usr/bin/cp libptscotch.a libptscotcherr*.a ../../lib make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotch' (cd scotch ; make VERSION=6 RELEASE=0 PATCHLEVEL=6 ptscotch && make ptinstall) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make CC="mpicc" SCOTCHLIB=ptscotch \ dggath \ dgmap \ dgord \ dgpart \ dgscat \ dgtst make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch dggath.c -o dggath -L../../lib -lptscotch -lscotch -lptscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch dgmap.c -o dgmap -L../../lib -lptscotch -lscotch -lptscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch dgord.c -o dgord -L../../lib -lptscotch -lscotch -lptscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch dgmap.c -DSCOTCH_COMPILE_PART -o dgpart -L../../lib -lptscotch -lscotch -lptscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch dgscat.c -o dgscat -L../../lib -lptscotch -lscotch -lptscotcherrexit -lz -lpthread -lm -lrt mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -I../libscotch dgtst.c -o dgtst -L../../lib -lptscotch -lscotch -lptscotcherrexit -lz -lpthread -lm -lrt make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make CC="mpicc" SCOTCHLIB=ptscotch \ dggath \ dgmap \ dgord \ dgpart \ dgscat \ dgtst make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' make[2]: 'dggath' is up to date. make[2]: 'dgmap' is up to date. make[2]: 'dgord' is up to date. make[2]: 'dgpart' is up to date. make[2]: 'dgscat' is up to date. make[2]: 'dgtst' is up to date. 
make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' /usr/bin/cp dggath dgmap dgord dgpart dgscat dgtst ../../bin make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/scotch' (cd libscotchmetis ; make ptscotch && make ptinstall) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' make CFLAGS="-fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH" CC="mpicc" SCOTCHLIB=ptscotch \ parmetis.h \ libptscotchparmetis.a make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' ../libscotch/dummysizes library_parmetis.h parmetis.h mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -I../../include -I../libscotch -c parmetis_dgraph_order.c -o parmetis_dgraph_order.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -I../../include -I../libscotch -c parmetis_dgraph_order_f.c -o parmetis_dgraph_order_f.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -I../../include -I../libscotch -c parmetis_dgraph_part.c -o parmetis_dgraph_part.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict= -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH -I../../include -I../libscotch -c parmetis_dgraph_part_f.c -o parmetis_dgraph_part_f.o /usr/bin/ar cr libptscotchparmetis.a parmetis_dgraph_order.o parmetis_dgraph_order_f.o parmetis_dgraph_part.o parmetis_dgraph_part_f.o /usr/bin/ranlib libptscotchparmetis.a make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' make CFLAGS="-fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -DSCOTCH_PTSCOTCH" CC="mpicc" SCOTCHLIB=ptscotch \ parmetis.h \ libptscotchparmetis.a make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' make[2]: 'parmetis.h' is up to date. make[2]: 'libptscotchparmetis.a' is up to date. 
make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' /usr/bin/cp parmetis.h ../../include /usr/bin/cp libptscotchparmetis.a ../../lib make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/libscotchmetis' (cd esmumps ; make ptscotch && make ptinstall) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' rm -f *~ common.h *.o lib*.a main_esmumps make CC="mpicc" CCD="mpicc" \ libesmumps.a \ main_esmumps make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' cat module.h ../libscotch/common.h > common.h mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c graph_graph.c -o graph_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order.c -o order.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order_scotch_graph.c -o order_scotch_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c dof.c -o dof.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol.c -o symbol.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol_fax_graph.c -o symbol_fax_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps.c -o esmumps.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_f.c -o esmumps_f.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_strats.c -o esmumps_strats.o /usr/bin/ar cr libesmumps.a graph_graph.o order.o order_scotch_graph.o dof.o symbol.o symbol_fax_graph.o esmumps.o esmumps_f.o esmumps_strats.o /usr/bin/ranlib libesmumps.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include main_esmumps.c -o main_esmumps -L. 
-lesmumps -L../../lib -lscotch -lscotcherrexit -lz -lpthread -lm -lrt make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' rm -f *~ common.h *.o lib*.a main_esmumps make CC="mpicc" CCD="mpicc" \ libesmumps.a \ main_esmumps make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' cat module.h ../libscotch/common.h > common.h mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c graph_graph.c -o graph_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order.c -o order.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order_scotch_graph.c -o order_scotch_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c dof.c -o dof.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol.c -o symbol.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol_fax_graph.c -o symbol_fax_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps.c -o esmumps.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_f.c -o esmumps_f.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_strats.c -o esmumps_strats.o /usr/bin/ar cr libesmumps.a graph_graph.o order.o order_scotch_graph.o dof.o symbol.o symbol_fax_graph.o esmumps.o esmumps_f.o esmumps_strats.o /usr/bin/ranlib libesmumps.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include main_esmumps.c -o main_esmumps -L. 
-lesmumps -L../../lib -lscotch -lscotcherrexit -lz -lpthread -lm -lrt make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' /usr/bin/cp esmumps.h ../../include /usr/bin/cp libesmumps.a ../../lib/libptesmumps.a make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' (cd esmumps ; make scotch && make install) make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' rm -f *~ common.h *.o lib*.a main_esmumps make CC="mpicc" CCD="mpicc" \ libesmumps.a \ main_esmumps make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' cat module.h ../libscotch/common.h > common.h mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c graph_graph.c -o graph_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order.c -o order.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order_scotch_graph.c -o order_scotch_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c dof.c -o dof.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol.c -o symbol.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol_fax_graph.c -o symbol_fax_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps.c -o esmumps.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_f.c -o esmumps_f.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_strats.c -o esmumps_strats.o /usr/bin/ar cr libesmumps.a graph_graph.o order.o order_scotch_graph.o dof.o symbol.o symbol_fax_graph.o esmumps.o esmumps_f.o esmumps_strats.o /usr/bin/ranlib libesmumps.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include main_esmumps.c -o main_esmumps -L. 
-lesmumps -L../../lib -lscotch -lscotcherrexit -lz -lpthread -lm -lrt make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' make[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' rm -f *~ common.h *.o lib*.a main_esmumps make CC="mpicc" CCD="mpicc" \ libesmumps.a \ main_esmumps make[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' cat module.h ../libscotch/common.h > common.h mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c graph_graph.c -o graph_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order.c -o order.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c order_scotch_graph.c -o order_scotch_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c dof.c -o dof.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol.c -o symbol.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c symbol_fax_graph.c -o symbol_fax_graph.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps.c -o esmumps.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_f.c -o esmumps_f.o mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include -c esmumps_strats.c -o esmumps_strats.o /usr/bin/ar cr libesmumps.a graph_graph.o order.o order_scotch_graph.o dof.o symbol.o symbol_fax_graph.o esmumps.o esmumps_f.o esmumps_strats.o /usr/bin/ranlib libesmumps.a mpicc -fPIC -fstack-protector -g -O -fopenmp -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_PTHREAD -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict="" -DINTSIZE32 -DSCOTCH_METIS_PREFIX -I../../include main_esmumps.c -o main_esmumps -L. 
-lesmumps -L../../lib -lscotch -lscotcherrexit -lz -lpthread -lm -lrt make[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' /usr/bin/cp esmumps.h ../../include /usr/bin/cp libesmumps.a ../../lib make[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f/src/esmumps' =============================================================================== Installing PTScotch; this may take several minutes =============================================================================== Executing: mkdir -p /home/wangzl/moose-compilers/petsc-3.11.4/include && mkdir -p /home/wangzl/moose-compilers/petsc-3.11.4/lib && cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-scotch-c15036faac5f && cp -f lib/*.a /home/wangzl/moose-compilers/petsc-3.11.4/lib/. && cp -f include/*.h /home/wangzl/moose-compilers/petsc-3.11.4/include/. ********Output of running make on PTSCOTCH follows ******* ********End of Output of running make on PTSCOTCH ******* Checking for library in Download PTSCOTCH: ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libptesmumps.a', 'libptscotchparmetis.a', 'libptscotch.a', 'libptscotcherr.a', 'libesmumps.a', 'libscotch.a', 'libscotcherr.a'] Contents: ['include', 'lib', 'share'] ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [SCOTCH_archBuild] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libptesmumps.a', 'libptscotchparmetis.a', 'libptscotch.a', 'libptscotcherr.a', 'libesmumps.a', 'libscotch.a', 'libscotcherr.a'] ['libm.a'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char SCOTCH_archBuild(); static void _check_SCOTCH_archBuild() { SCOTCH_archBuild(); } int main() { _check_SCOTCH_archBuild();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lm -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBPTESMUMPS" to "1" Defined "HAVE_LIBPTSCOTCHPARMETIS" to "1" Defined "HAVE_LIBPTSCOTCH" to "1" Defined "HAVE_LIBPTSCOTCHERR" to "1" Defined "HAVE_LIBESMUMPS" to "1" Defined "HAVE_LIBSCOTCH" to "1" Defined "HAVE_LIBSCOTCHERR" to "1" Checking for functions [SCOTCH_ParMETIS_V3_NodeND] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libptesmumps.a', 'libptscotchparmetis.a', 'libptscotch.a', 'libptscotcherr.a', 'libesmumps.a', 'libscotch.a', 'libscotcherr.a'] ['libm.a'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char SCOTCH_ParMETIS_V3_NodeND(); static void _check_SCOTCH_ParMETIS_V3_NodeND() { SCOTCH_ParMETIS_V3_NodeND(); } int main() { _check_SCOTCH_ParMETIS_V3_NodeND();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lm -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_SCOTCH_PARMETIS_V3_NODEND" to "1" Checking for headers Download PTSCOTCH: ['/home/wangzl/moose-compilers/petsc-3.11.4/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) TESTING: checkInclude from config.headers(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/headers.py:86) Checks if a particular include file can be found along particular include paths Checking for header files ['ptscotch.h'] in ['/home/wangzl/moose-compilers/petsc-3.11.4/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Checking include with compiler flags var CPPFLAGS ['/home/wangzl/moose-compilers/petsc-3.11.4/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] Executing: mpicc -E -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.headers -I/home/wangzl/moose-compilers/petsc-3.11.4/include -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0 -I/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include /tmp/petsc-wjcu960y/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Found header files ['ptscotch.h'] in ['/home/wangzl/moose-compilers/petsc-3.11.4/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include'] ================================================================================ TEST checkSharedLibrary from config.packages.PTScotch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.PTScotch(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST checkDependencies from config.packages.metis(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.metis(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary 
from config.packages.metis(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/metis.py:47) TESTING: configureLibrary from config.packages.metis(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/metis.py:47) ================================================================================== Checking for a functional metis Looking for METIS at git.metis, hg.metis or a directory starting with ['petsc-pkg-metis'] Could not locate an existing copy of METIS: ['git.slepc', 'petsc-pkg-scotch-c15036faac5f'] Downloading metis =============================================================================== Trying to download file:///home/wangzl/packages/petsc-pkg-metis-49e61501c498.tar.gz for METIS =============================================================================== Downloading file:///home/wangzl/packages/petsc-pkg-metis-49e61501c498.tar.gz to /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/_d_petsc-pkg-metis-49e61501c498.tar.gz Extracting /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/_d_petsc-pkg-metis-49e61501c498.tar.gz Executing: cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages; chmod -R a+r petsc-pkg-metis-49e61501c498;find petsc-pkg-metis-49e61501c498 -type d -name "*" -exec chmod a+rx {} \; Looking for METIS at git.metis, hg.metis or a directory starting with ['petsc-pkg-metis'] Found a copy of METIS in petsc-pkg-metis-49e61501c498 Have to rebuild METIS, /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/metis.petscconf != /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/lib/petsc/conf/pkg.conf.metis =============================================================================== Configuring METIS with cmake, this may take several minutes =============================================================================== Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/home/wangzl/moose-compilers/petsc-3.11.4 -DCMAKE_VERBOSE_MAKEFILE=1 -DCMAKE_C_COMPILER="mpicc" -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib -DCMAKE_C_FLAGS:STRING="-fPIC -fstack-protector -g -O -fopenmp" -DCMAKE_C_FLAGS_DEBUG:STRING="-fPIC -fstack-protector -g -O -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-fPIC -fstack-protector -g -O -fopenmp" -DCMAKE_CXX_COMPILER="mpicxx" -DCMAKE_CXX_FLAGS:STRING="-fstack-protector -g -O -fopenmp -fPIC" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-fstack-protector -g -O -fopenmp -fPIC" -DCMAKE_CXX_FLAGS_RELEASE:STRING="-fstack-protector -g -O -fopenmp -fPIC" -DCMAKE_Fortran_COMPILER="mpif90" -DCMAKE_Fortran_FLAGS:STRING="-fPIC -ffree-line-length-0 -g -O -fopenmp" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-fPIC -ffree-line-length-0 -g -O -fopenmp" -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-fPIC -ffree-line-length-0 -g -O -fopenmp" -DBUILD_SHARED_LIBS=on -DGKLIB_PATH=../GKlib -DGKRAND=1 -DSHARED=1 -DMATH_LIB="-lm" stdout: -- The C compiler identification is GNU 7.5.0 -- Check for working C compiler: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -- Check for working C compiler: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc - works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Looking for execinfo.h -- Looking for execinfo.h - found -- Looking for getline -- Looking for getline - found -- Configuring done -- Generating done -- Build files have been written to: /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build =============================================================================== Compiling and installing METIS; this may take several minutes =============================================================================== Executing: /usr/bin/gmake -j16 -l30.0 stdout: /usr/bin/cmake -S/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498 -B/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles/progress.marks /usr/bin/gmake -f CMakeFiles/Makefile2 all gmake[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build' /usr/bin/gmake -f libmetis/CMakeFiles/metis.dir/build.make libmetis/CMakeFiles/metis.dir/depend gmake[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build' cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498 /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis 
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis/CMakeFiles/metis.dir/DependInfo.cmake --color= Scanning dependencies of target metis gmake[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build' /usr/bin/gmake -f libmetis/CMakeFiles/metis.dir/build.make libmetis/CMakeFiles/metis.dir/build gmake[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build' [ 1%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/b64.c.o [ 3%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/error.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/b64.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/b64.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/error.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/error.c [ 6%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/fs.c.o [ 6%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/blas.c.o [ 9%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/csr.c.o [ 9%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/gkregex.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/blas.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/blas.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/fs.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/fs.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/csr.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/csr.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/gkregex.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/gkregex.c [ 12%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/evaluate.c.o [ 12%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/getopt.c.o [ 14%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/fkvkselect.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/getopt.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/getopt.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/evaluate.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/evaluate.c [ 16%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/itemsets.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/fkvkselect.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/fkvkselect.c [ 17%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/io.c.o [ 19%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/memory.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/itemsets.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/itemsets.c [ 22%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/htable.c.o [ 22%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/omp.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/io.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/io.c [ 24%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/graph.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/memory.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/memory.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/omp.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/omp.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/htable.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/htable.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/graph.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/graph.c [ 25%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/mcore.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/mcore.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/mcore.c [ 27%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/pdb.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/pdb.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/pdb.c [ 29%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/pqueue.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/pqueue.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/pqueue.c [ 30%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/random.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/random.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/random.c [ 32%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/rw.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/rw.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/rw.c [ 33%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/seq.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/seq.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/seq.c [ 35%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/sort.c.o [ 37%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/string.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/sort.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/sort.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/string.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/string.c [ 38%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/timers.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/timers.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/timers.c [ 40%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/tokenizer.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/tokenizer.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/tokenizer.c [ 41%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/util.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/util.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/util.c [ 43%] Building C object libmetis/CMakeFiles/metis.dir/auxapi.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/auxapi.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/auxapi.c [ 45%] Building C object libmetis/CMakeFiles/metis.dir/balance.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/balance.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/balance.c [ 46%] Building C object libmetis/CMakeFiles/metis.dir/bucketsort.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/bucketsort.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/bucketsort.c [ 48%] Building C object libmetis/CMakeFiles/metis.dir/checkgraph.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/checkgraph.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/checkgraph.c [ 50%] Building C object libmetis/CMakeFiles/metis.dir/coarsen.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/coarsen.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/coarsen.c [ 51%] Building C object libmetis/CMakeFiles/metis.dir/compress.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/compress.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/compress.c [ 53%] Building C object libmetis/CMakeFiles/metis.dir/contig.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/contig.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/contig.c [ 54%] Building C object libmetis/CMakeFiles/metis.dir/debug.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/debug.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/debug.c [ 56%] Building C object libmetis/CMakeFiles/metis.dir/fm.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/fm.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/fm.c [ 58%] Building C object libmetis/CMakeFiles/metis.dir/fortran.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/fortran.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/fortran.c [ 59%] Building C object libmetis/CMakeFiles/metis.dir/frename.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/frename.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/frename.c [ 61%] Building C object libmetis/CMakeFiles/metis.dir/gklib.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/gklib.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/gklib.c [ 62%] Building C object libmetis/CMakeFiles/metis.dir/graph.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/graph.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/graph.c [ 64%] Building C object libmetis/CMakeFiles/metis.dir/initpart.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/initpart.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/initpart.c [ 66%] Building C object libmetis/CMakeFiles/metis.dir/kmetis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/kmetis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/kmetis.c [ 67%] Building C object libmetis/CMakeFiles/metis.dir/kwayfm.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/kwayfm.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/kwayfm.c [ 69%] Building C object libmetis/CMakeFiles/metis.dir/kwayrefine.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/kwayrefine.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/kwayrefine.c [ 70%] Building C object libmetis/CMakeFiles/metis.dir/mcutil.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mcutil.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mcutil.c [ 72%] Building C object libmetis/CMakeFiles/metis.dir/mesh.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mesh.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mesh.c [ 74%] Building C object libmetis/CMakeFiles/metis.dir/meshpart.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/meshpart.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/meshpart.c [ 75%] Building C object libmetis/CMakeFiles/metis.dir/minconn.c.o [ 77%] Building C object libmetis/CMakeFiles/metis.dir/mincover.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/minconn.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/minconn.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mincover.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mincover.c [ 79%] Building C object libmetis/CMakeFiles/metis.dir/mmd.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mmd.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mmd.c [ 80%] Building C object libmetis/CMakeFiles/metis.dir/ometis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/ometis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/ometis.c [ 82%] Building C object libmetis/CMakeFiles/metis.dir/options.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/options.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/options.c [ 83%] Building C object libmetis/CMakeFiles/metis.dir/parmetis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/parmetis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/parmetis.c [ 85%] Building C object libmetis/CMakeFiles/metis.dir/pmetis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/pmetis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/pmetis.c [ 87%] Building C object libmetis/CMakeFiles/metis.dir/refine.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/refine.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/refine.c [ 88%] Building C object libmetis/CMakeFiles/metis.dir/separator.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/separator.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/separator.c [ 90%] Building C object libmetis/CMakeFiles/metis.dir/sfm.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/sfm.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/sfm.c [ 91%] Building C object libmetis/CMakeFiles/metis.dir/srefine.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/srefine.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/srefine.c [ 93%] Building C object libmetis/CMakeFiles/metis.dir/stat.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/stat.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/stat.c [ 95%] Building C object libmetis/CMakeFiles/metis.dir/timing.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/timing.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/timing.c [ 96%] Building C object libmetis/CMakeFiles/metis.dir/util.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/util.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/util.c [ 98%] Building C object libmetis/CMakeFiles/metis.dir/wspace.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/wspace.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/wspace.c [100%] Linking C shared library libmetis.so cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /usr/bin/cmake -E cmake_link_script CMakeFiles/metis.dir/link.txt --verbose=1 /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -fPIC -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -shared -Wl,-soname,libmetis.so -o libmetis.so CMakeFiles/metis.dir/__/GKlib/b64.c.o CMakeFiles/metis.dir/__/GKlib/blas.c.o CMakeFiles/metis.dir/__/GKlib/csr.c.o CMakeFiles/metis.dir/__/GKlib/error.c.o CMakeFiles/metis.dir/__/GKlib/evaluate.c.o CMakeFiles/metis.dir/__/GKlib/fkvkselect.c.o CMakeFiles/metis.dir/__/GKlib/fs.c.o CMakeFiles/metis.dir/__/GKlib/getopt.c.o CMakeFiles/metis.dir/__/GKlib/gkregex.c.o CMakeFiles/metis.dir/__/GKlib/graph.c.o CMakeFiles/metis.dir/__/GKlib/htable.c.o CMakeFiles/metis.dir/__/GKlib/io.c.o CMakeFiles/metis.dir/__/GKlib/itemsets.c.o CMakeFiles/metis.dir/__/GKlib/mcore.c.o CMakeFiles/metis.dir/__/GKlib/memory.c.o CMakeFiles/metis.dir/__/GKlib/omp.c.o CMakeFiles/metis.dir/__/GKlib/pdb.c.o CMakeFiles/metis.dir/__/GKlib/pqueue.c.o CMakeFiles/metis.dir/__/GKlib/random.c.o CMakeFiles/metis.dir/__/GKlib/rw.c.o CMakeFiles/metis.dir/__/GKlib/seq.c.o CMakeFiles/metis.dir/__/GKlib/sort.c.o CMakeFiles/metis.dir/__/GKlib/string.c.o CMakeFiles/metis.dir/__/GKlib/timers.c.o CMakeFiles/metis.dir/__/GKlib/tokenizer.c.o CMakeFiles/metis.dir/__/GKlib/util.c.o CMakeFiles/metis.dir/auxapi.c.o CMakeFiles/metis.dir/balance.c.o CMakeFiles/metis.dir/bucketsort.c.o CMakeFiles/metis.dir/checkgraph.c.o CMakeFiles/metis.dir/coarsen.c.o CMakeFiles/metis.dir/compress.c.o CMakeFiles/metis.dir/contig.c.o CMakeFiles/metis.dir/debug.c.o CMakeFiles/metis.dir/fm.c.o CMakeFiles/metis.dir/fortran.c.o 
CMakeFiles/metis.dir/frename.c.o CMakeFiles/metis.dir/gklib.c.o CMakeFiles/metis.dir/graph.c.o CMakeFiles/metis.dir/initpart.c.o CMakeFiles/metis.dir/kmetis.c.o CMakeFiles/metis.dir/kwayfm.c.o CMakeFiles/metis.dir/kwayrefine.c.o CMakeFiles/metis.dir/mcutil.c.o CMakeFiles/metis.dir/mesh.c.o CMakeFiles/metis.dir/meshpart.c.o CMakeFiles/metis.dir/minconn.c.o CMakeFiles/metis.dir/mincover.c.o CMakeFiles/metis.dir/mmd.c.o CMakeFiles/metis.dir/ometis.c.o CMakeFiles/metis.dir/options.c.o CMakeFiles/metis.dir/parmetis.c.o CMakeFiles/metis.dir/pmetis.c.o CMakeFiles/metis.dir/refine.c.o CMakeFiles/metis.dir/separator.c.o CMakeFiles/metis.dir/sfm.c.o CMakeFiles/metis.dir/srefine.c.o CMakeFiles/metis.dir/stat.c.o CMakeFiles/metis.dir/timing.c.o CMakeFiles/metis.dir/util.c.o CMakeFiles/metis.dir/wspace.c.o -lm
gmake[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
[100%] Built target metis
gmake[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
/usr/bin/cmake -E cmake_progress_start /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles 0
Executing: /usr/bin/gmake install
stdout:
/usr/bin/cmake -S/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498 -B/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/cmake -E cmake_progress_start /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles/progress.marks
/usr/bin/gmake -f CMakeFiles/Makefile2 all
gmake[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
/usr/bin/gmake -f libmetis/CMakeFiles/metis.dir/build.make libmetis/CMakeFiles/metis.dir/depend
gmake[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498 /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis/CMakeFiles/metis.dir/DependInfo.cmake --color=
gmake[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
/usr/bin/gmake -f libmetis/CMakeFiles/metis.dir/build.make libmetis/CMakeFiles/metis.dir/build
gmake[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
gmake[2]: Nothing to be done for 'libmetis/CMakeFiles/metis.dir/build'.
gmake[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
[100%] Built target metis
gmake[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
/usr/bin/cmake -E cmake_progress_start /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles 0
/usr/bin/gmake -f CMakeFiles/Makefile2 preinstall
gmake[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
gmake[1]: Nothing to be done for 'preinstall'.
gmake[1]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
Install the project...
/usr/bin/cmake -P cmake_install.cmake
-- Install configuration: ""
-- Installing: /home/wangzl/moose-compilers/petsc-3.11.4/include/metis.h
-- Installing: /home/wangzl/moose-compilers/petsc-3.11.4/include/gklib_tls.h
-- Installing: /home/wangzl/moose-compilers/petsc-3.11.4/lib/libmetis.so
-- Up-to-date: /home/wangzl/moose-compilers/petsc-3.11.4/include/gklib_defs.h
-- Up-to-date: /home/wangzl/moose-compilers/petsc-3.11.4/include/gklib_rename.h
********Output of running make on METIS follows *******
-- The C compiler identification is GNU 7.5.0
-- Check for working C compiler: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc
-- Check for working C compiler: /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc - works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Looking for execinfo.h
-- Looking for execinfo.h - found
-- Looking for getline
-- Looking for getline - found
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build
CMake Warning:
  Manually-specified variables were not used by the project:
    CMAKE_CXX_COMPILER
    CMAKE_CXX_FLAGS
    CMAKE_CXX_FLAGS_DEBUG
    CMAKE_CXX_FLAGS_RELEASE
    CMAKE_C_FLAGS_DEBUG
    CMAKE_C_FLAGS_RELEASE
    CMAKE_Fortran_COMPILER
    CMAKE_Fortran_FLAGS
    CMAKE_Fortran_FLAGS_DEBUG
    CMAKE_Fortran_FLAGS_RELEASE
/usr/bin/cmake -S/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498 -B/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/cmake -E cmake_progress_start /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles/progress.marks
/usr/bin/gmake -f CMakeFiles/Makefile2 all
gmake[1]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
/usr/bin/gmake -f libmetis/CMakeFiles/metis.dir/build.make libmetis/CMakeFiles/metis.dir/depend
gmake[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis/CMakeFiles/metis.dir/DependInfo.cmake --color= Scanning dependencies of target metis gmake[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build' /usr/bin/gmake -f libmetis/CMakeFiles/metis.dir/build.make libmetis/CMakeFiles/metis.dir/build gmake[2]: Entering directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build' [ 1%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/b64.c.o [ 3%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/error.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/b64.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/b64.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/error.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/error.c [ 6%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/fs.c.o [ 6%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/blas.c.o [ 9%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/csr.c.o [ 9%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/gkregex.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/blas.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/blas.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/fs.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/fs.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/csr.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/csr.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/gkregex.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/gkregex.c [ 12%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/evaluate.c.o [ 12%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/getopt.c.o [ 14%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/fkvkselect.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/getopt.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/getopt.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/evaluate.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/evaluate.c [ 16%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/itemsets.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/fkvkselect.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/fkvkselect.c [ 17%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/io.c.o [ 19%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/memory.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/itemsets.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/itemsets.c [ 22%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/htable.c.o [ 22%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/omp.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/io.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/io.c [ 24%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/graph.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/memory.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/memory.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/omp.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/omp.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/htable.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/htable.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/graph.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/graph.c [ 25%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/mcore.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/mcore.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/mcore.c [ 27%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/pdb.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/pdb.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/pdb.c [ 29%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/pqueue.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/pqueue.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/pqueue.c [ 30%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/random.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/random.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/random.c [ 32%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/rw.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/rw.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/rw.c [ 33%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/seq.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/seq.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/seq.c [ 35%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/sort.c.o [ 37%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/string.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/sort.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/sort.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/string.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/string.c [ 38%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/timers.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/timers.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/timers.c [ 40%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/tokenizer.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/tokenizer.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/tokenizer.c [ 41%] Building C object libmetis/CMakeFiles/metis.dir/__/GKlib/util.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/__/GKlib/util.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/util.c [ 43%] Building C object libmetis/CMakeFiles/metis.dir/auxapi.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/auxapi.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/auxapi.c [ 45%] Building C object libmetis/CMakeFiles/metis.dir/balance.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/balance.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/balance.c [ 46%] Building C object libmetis/CMakeFiles/metis.dir/bucketsort.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/bucketsort.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/bucketsort.c [ 48%] Building C object libmetis/CMakeFiles/metis.dir/checkgraph.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/checkgraph.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/checkgraph.c [ 50%] Building C object libmetis/CMakeFiles/metis.dir/coarsen.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/coarsen.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/coarsen.c [ 51%] Building C object libmetis/CMakeFiles/metis.dir/compress.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/compress.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/compress.c [ 53%] Building C object libmetis/CMakeFiles/metis.dir/contig.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/contig.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/contig.c [ 54%] Building C object libmetis/CMakeFiles/metis.dir/debug.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/debug.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/debug.c [ 56%] Building C object libmetis/CMakeFiles/metis.dir/fm.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/fm.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/fm.c [ 58%] Building C object libmetis/CMakeFiles/metis.dir/fortran.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/fortran.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/fortran.c [ 59%] Building C object libmetis/CMakeFiles/metis.dir/frename.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/frename.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/frename.c [ 61%] Building C object libmetis/CMakeFiles/metis.dir/gklib.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/gklib.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/gklib.c [ 62%] Building C object libmetis/CMakeFiles/metis.dir/graph.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/graph.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/graph.c [ 64%] Building C object libmetis/CMakeFiles/metis.dir/initpart.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/initpart.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/initpart.c [ 66%] Building C object libmetis/CMakeFiles/metis.dir/kmetis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/kmetis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/kmetis.c [ 67%] Building C object libmetis/CMakeFiles/metis.dir/kwayfm.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/kwayfm.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/kwayfm.c [ 69%] Building C object libmetis/CMakeFiles/metis.dir/kwayrefine.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/kwayrefine.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/kwayrefine.c [ 70%] Building C object libmetis/CMakeFiles/metis.dir/mcutil.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mcutil.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mcutil.c [ 72%] Building C object libmetis/CMakeFiles/metis.dir/mesh.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mesh.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mesh.c [ 74%] Building C object libmetis/CMakeFiles/metis.dir/meshpart.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/meshpart.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/meshpart.c [ 75%] Building C object libmetis/CMakeFiles/metis.dir/minconn.c.o [ 77%] Building C object libmetis/CMakeFiles/metis.dir/mincover.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/minconn.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/minconn.c cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mincover.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mincover.c [ 79%] Building C object libmetis/CMakeFiles/metis.dir/mmd.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/mmd.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/mmd.c [ 80%] Building C object libmetis/CMakeFiles/metis.dir/ometis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/ometis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/ometis.c [ 82%] Building C object libmetis/CMakeFiles/metis.dir/options.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/options.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/options.c [ 83%] Building C object libmetis/CMakeFiles/metis.dir/parmetis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/parmetis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/parmetis.c [ 85%] Building C object libmetis/CMakeFiles/metis.dir/pmetis.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/pmetis.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/pmetis.c [ 87%] Building C object libmetis/CMakeFiles/metis.dir/refine.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/refine.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/refine.c [ 88%] Building C object libmetis/CMakeFiles/metis.dir/separator.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/separator.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/separator.c [ 90%] Building C object libmetis/CMakeFiles/metis.dir/sfm.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/sfm.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/sfm.c [ 91%] Building C object libmetis/CMakeFiles/metis.dir/srefine.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/srefine.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/srefine.c [ 93%] Building C object libmetis/CMakeFiles/metis.dir/stat.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/stat.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/stat.c [ 95%] Building C object libmetis/CMakeFiles/metis.dir/timing.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/timing.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/timing.c [ 96%] Building C object libmetis/CMakeFiles/metis.dir/util.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/util.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/util.c [ 98%] Building C object libmetis/CMakeFiles/metis.dir/wspace.c.o cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -Dmetis_EXPORTS -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/include -I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/. 
-I/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/include -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -fPIC -o CMakeFiles/metis.dir/wspace.c.o -c /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/libmetis/wspace.c [100%] Linking C shared library libmetis.so cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/libmetis && /usr/bin/cmake -E cmake_link_script CMakeFiles/metis.dir/link.txt --verbose=1 /opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/bin/mpicc -fPIC -fPIC -fstack-protector -g -O -fopenmp -DLINUX -D_FILE_OFFSET_BITS=64 -std=c99 -fno-strict-aliasing -fPIC -Wall -pedantic -Wno-unused-variable -Wno-unknown-pragmas -DNDEBUG -DNDEBUG2 -DUSE_GKRAND -DHAVE_EXECINFO_H -DHAVE_GETLINE -shared -Wl,-soname,libmetis.so -o libmetis.so CMakeFiles/metis.dir/__/GKlib/b64.c.o CMakeFiles/metis.dir/__/GKlib/blas.c.o CMakeFiles/metis.dir/__/GKlib/csr.c.o CMakeFiles/metis.dir/__/GKlib/error.c.o CMakeFiles/metis.dir/__/GKlib/evaluate.c.o CMakeFiles/metis.dir/__/GKlib/fkvkselect.c.o CMakeFiles/metis.dir/__/GKlib/fs.c.o CMakeFiles/metis.dir/__/GKlib/getopt.c.o CMakeFiles/metis.dir/__/GKlib/gkregex.c.o CMakeFiles/metis.dir/__/GKlib/graph.c.o CMakeFiles/metis.dir/__/GKlib/htable.c.o CMakeFiles/metis.dir/__/GKlib/io.c.o CMakeFiles/metis.dir/__/GKlib/itemsets.c.o CMakeFiles/metis.dir/__/GKlib/mcore.c.o CMakeFiles/metis.dir/__/GKlib/memory.c.o CMakeFiles/metis.dir/__/GKlib/omp.c.o CMakeFiles/metis.dir/__/GKlib/pdb.c.o CMakeFiles/metis.dir/__/GKlib/pqueue.c.o CMakeFiles/metis.dir/__/GKlib/random.c.o CMakeFiles/metis.dir/__/GKlib/rw.c.o CMakeFiles/metis.dir/__/GKlib/seq.c.o CMakeFiles/metis.dir/__/GKlib/sort.c.o CMakeFiles/metis.dir/__/GKlib/string.c.o CMakeFiles/metis.dir/__/GKlib/timers.c.o CMakeFiles/metis.dir/__/GKlib/tokenizer.c.o CMakeFiles/metis.dir/__/GKlib/util.c.o CMakeFiles/metis.dir/auxapi.c.o CMakeFiles/metis.dir/balance.c.o CMakeFiles/metis.dir/bucketsort.c.o CMakeFiles/metis.dir/checkgraph.c.o CMakeFiles/metis.dir/coarsen.c.o CMakeFiles/metis.dir/compress.c.o CMakeFiles/metis.dir/contig.c.o CMakeFiles/metis.dir/debug.c.o CMakeFiles/metis.dir/fm.c.o CMakeFiles/metis.dir/fortran.c.o CMakeFiles/metis.dir/frename.c.o CMakeFiles/metis.dir/gklib.c.o CMakeFiles/metis.dir/graph.c.o CMakeFiles/metis.dir/initpart.c.o CMakeFiles/metis.dir/kmetis.c.o CMakeFiles/metis.dir/kwayfm.c.o CMakeFiles/metis.dir/kwayrefine.c.o CMakeFiles/metis.dir/mcutil.c.o CMakeFiles/metis.dir/mesh.c.o CMakeFiles/metis.dir/meshpart.c.o CMakeFiles/metis.dir/minconn.c.o CMakeFiles/metis.dir/mincover.c.o CMakeFiles/metis.dir/mmd.c.o CMakeFiles/metis.dir/ometis.c.o CMakeFiles/metis.dir/options.c.o CMakeFiles/metis.dir/parmetis.c.o CMakeFiles/metis.dir/pmetis.c.o CMakeFiles/metis.dir/refine.c.o CMakeFiles/metis.dir/separator.c.o CMakeFiles/metis.dir/sfm.c.o CMakeFiles/metis.dir/srefine.c.o CMakeFiles/metis.dir/stat.c.o CMakeFiles/metis.dir/timing.c.o CMakeFiles/metis.dir/util.c.o CMakeFiles/metis.dir/wspace.c.o -lm gmake[2]: Leaving directory '/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build' [100%] Built target metis gmake[1]: Leaving directory 
'/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build'
/usr/bin/cmake -E cmake_progress_start /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/petsc-build/CMakeFiles 0
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/error.c: In function 'gk_strerror':
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/error.c:183:3: warning: implicit declaration of function 'strerror_r'; did you mean 'strerror'? [-Wimplicit-function-declaration] strerror_r(errnum, buf, 1024); ^~~~~~~~~~ strerror
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/getopt.c: In function 'gk_getopt_internal':
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/getopt.c:343:5: warning: this 'if' clause does not guard... [-Wmisleading-indentation] if (gk_optind == 0) ^~
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/getopt.c:345:7: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if' optstring = gk_getopt_initialize (argc, argv, optstring); ^~~~~~~~~
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/getopt.c:700:2: warning: this 'else' clause does not guard... [-Wmisleading-indentation] else ^~~~
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/getopt.c:703:4: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'else' nextchar = NULL; ^~~~~~~~
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/csr.c: In function 'gk_csr_Normalize':
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/csr.c:1344:9: warning: this 'if' clause does not guard... [-Wmisleading-indentation] else if (norm == 1) ^~
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/petsc-pkg-metis-49e61501c498/GKlib/csr.c:1346:11: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if' for (j=ptr[i]; j' may be used uninitialized in this function [-Wmaybe-uninitialized]
clahr2.f:307:0: Warning: 'REALPART_EXPR '
may be used uninitialized in this function [-Wmaybe-uninitialized]
claic1.f:197:21: TMP = SQRT( S*CONJG( S )+C*CONJG( C ) ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:251:19: T = C / ( B+SQRT( B*B+C ) ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:253:19: T = SQRT( B*B+C ) - B 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:258:18: TMP = SQRT( SINE*CONJG( SINE )+COSINE*CONJG( COSINE ) ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:283:18: TMP = SQRT( S*CONJG( S )+C*CONJG( C ) ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:341:19: T = C / ( B+SQRT( ABS( B*B-C ) ) ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:352:22: T = -C / ( B+SQRT( B*B+C ) ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:354:22: T = B - SQRT( B*B+C ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
claic1.f:360:18: TMP = SQRT( SINE*CONJG( SINE )+COSINE*CONJG( COSINE ) ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cla_lin_berr.f:151:21: TMP = (SAFE1 + CABS1(RES(I,J)))/AYB(I,J) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
clals0.f:413:0: $ DIFRJ ) / ( POLES( I, 2 )+DJ ) Warning: 'difrj' may be used uninitialized in this function [-Wmaybe-uninitialized]
clangb.f:150:0: REAL SCALE, SUM, VALUE, TEMP Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clange.f:140:0: REAL SCALE, SUM, VALUE, TEMP Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clangt.f:130:0: REAL ANORM, SCALE, SUM, TEMP Warning: 'anorm' may be used uninitialized in this function [-Wmaybe-uninitialized]
clanhb.f:157:0: REAL ABSA, SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clanhe.f:149:0: REAL ABSA, SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clanhp.f:142:0: REAL ABSA, SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clanhs.f:134:0: REAL SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clanht.f:126:0: REAL ANORM, SCALE, SUM Warning: 'anorm' may be used uninitialized in this function [-Wmaybe-uninitialized]
clansb.f:155:0: REAL ABSA, SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clansp.f:140:0: REAL ABSA, SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clansy.f:148:0: REAL ABSA, SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clantb.f:167:0: REAL SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clantp.f:151:0: REAL SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clantr.f:168:0: REAL SCALE, SUM, VALUE Warning: 'value' may be used uninitialized in this function [-Wmaybe-uninitialized]
clarfgp.f:241:25: BETA = -SAVEALPHA 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
clasyf_rook.f:706:0: ELSE IF( ( P.EQ.JMAX ) .OR. ( ROWMAX.LE.COLMAX ) ) Warning: 'jmax' may be used uninitialized in this function [-Wmaybe-uninitialized]
clatrd.f:274:26: E( I-1 ) = ALPHA 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
clatrd.f:328:24: E( I ) = ALPHA 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
clauu2.f:168:18: AII = A( I, I ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
clauu2.f:187:18: AII = A( I, I ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cpbsvx.f:430:0: SCOND = MAX( SMIN, SMLNUM ) / MIN( SMAX, BIGNUM ) Warning: 'smlnum' may be used uninitialized in this function [-Wmaybe-uninitialized]
cpbsvx.f:430:0: Warning: 'bignum' may be used uninitialized in this function [-Wmaybe-uninitialized]
cpoequb.f:180:15: S( 1 ) = A( 1, 1 ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cpoequb.f:184:18: S( I ) = A( I, I ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cporfsx.f:473:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion]
cposvx.f:392:0: SCOND = MAX( SMIN, SMLNUM ) / MIN( SMAX, BIGNUM ) Warning: 'smlnum' may be used uninitialized in this function [-Wmaybe-uninitialized]
cposvx.f:392:0: Warning: 'bignum' may be used uninitialized in this function [-Wmaybe-uninitialized]
cpotf2.f:180:18: AJJ = REAL( A( J, J ) ) - CDOTC( J-1, A( 1, J ), 1, 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cpotf2.f:207:18: AJJ = REAL( A( J, J ) ) - CDOTC( J-1, A( J, 1 ), LDA, 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cppsvx.f:393:0: SCOND = MAX( SMIN, SMLNUM ) / MIN( SMAX, BIGNUM ) Warning: 'smlnum' may be used uninitialized in this function [-Wmaybe-uninitialized]
cppsvx.f:393:0: Warning: 'bignum' may be used uninitialized in this function [-Wmaybe-uninitialized]
cpptrf.f:195:18: AJJ = REAL( AP( JJ ) ) - CDOTC( J-1, AP( JC ), 1, AP( JC ), 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cpptri.f:167:18: AJJ = AP( JJ ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
cspr.f:220:0: JX = KX Warning: 'kx' may be used uninitialized in this function [-Wmaybe-uninitialized]
csptrf.f:602:0: IPIV( K ) = -KP Warning: 'imax' may be used uninitialized in this function [-Wmaybe-uninitialized]
csptrf.f:322:0: CALL CSWAP( KP-1, AP( KNC ), 1, AP( KPC ), 1 ) Warning: 'kpc' may be used uninitialized in this function [-Wmaybe-uninitialized]
cstedc.f:484:0: WORK( 1 ) = LWMIN Warning: 'lwmin' may be used uninitialized in this function [-Wmaybe-uninitialized]
cstedc.f:485:0: RWORK( 1 ) = LRWMIN Warning: 'lrwmin' may be used uninitialized in this function [-Wmaybe-uninitialized]
cstedc.f:486:0: IWORK( 1 ) = LIWMIN Warning: 'liwmin' may be used uninitialized in this function [-Wmaybe-uninitialized]
cstein.f:352:0: $ XJ = XJM + PERTOL Warning: 'xjm' may be used uninitialized in this function [-Wmaybe-uninitialized]
cstein.f:425:0: IF( NRM.LT.STPCRT ) Warning: 'stpcrt' may be used uninitialized in this function [-Wmaybe-uninitialized]
cstein.f:400:0: IF( ABS( XJ-XJM ).GT.ORTOL ) Warning: 'ortol' may be used uninitialized in this function [-Wmaybe-uninitialized]
cstein.f:386:0: $ SASUM( BLKSIZ, WORK( INDRV1+1 ), 1 ) Warning: 'onenrm' may be used uninitialized in this function [-Wmaybe-uninitialized]
csyequb.f:275:16: AVG = AVG + S( I )*WORK( I ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
csyequb.f:292:15: C1 = ( N-2 ) * ( WORK( I ) - T*SI ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
csyequb.f:293:15: C0 = -(T*SI)*SI + 2*WORK( I )*SI - N*AVG 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
csyequb.f:327:16: AVG = AVG + ( U + WORK( I ) ) * D / N 1 Warning: Possible change of value in conversion from COMPLEX(4) to REAL(4) at (1) [-Wconversion]
csyr.f:226:0: IX = KX Warning: 'kx' may be used uninitialized in this function [-Wmaybe-uninitialized]
csyrfsx.f:483:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion]
csysv.f:229:21: LWKOPT = WORK(1) 1 Warning: Possible change of value in conversion from COMPLEX(4) to INTEGER(4) at (1) [-Wconversion]
csysv_rook.f:262:21: LWKOPT = WORK(1) 1 Warning: Possible change of value in conversion from COMPLEX(4) to INTEGER(4) at (1) [-Wconversion]
csytf2.f:595:0: IPIV( K ) = -KP Warning: 'imax' may be used uninitialized in this function [-Wmaybe-uninitialized]
csytf2_rook.f:638:0: ELSE IF( ( P.EQ.JMAX ).OR.( ROWMAX.LE.COLMAX ) ) THEN Warning: 'jmax' may be used uninitialized in this function [-Wmaybe-uninitialized]
csytf2_rook.f:416:0: IF( KP.NE.KK ) THEN Warning: 'imax' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctfttr.f:459:0: IJ = IJ - NP1X2 Warning: 'np1x2' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctfttr.f:356:0: IJ = IJ - NX2 Warning: 'nx2' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctgsyl.f:609:0: SCALE = SCALE2 Warning: 'scale2' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctptri.f:230:0: $ AP( JCLAST ), AP( JC+1 ), 1 ) Warning: 'jclast' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctrsen.f:450:0: WORK( 1 ) = LWMIN Warning: 'lwmin' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctrttf.f:458:0: IJ = IJ - NP1X2 Warning: 'np1x2' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctrttf.f:355:0: IJ = IJ - NX2 Warning: 'nx2' may be used uninitialized in this function [-Wmaybe-uninitialized]
ctzrzf.f:251:0: IWS = LDWORK*NB Warning: 'nb' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:520:0: $ WORK(ITAUQ1), WORK(IORBDB), LORBDB, CHILDINFO ) Warning: 'itauq1' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:520:0: Warning: 'itaup2' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:532:0: $ WORK(IORGQR), LORGQR, CHILDINFO ) Warning: 'iorgqr' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:507:0: LORGLQ = LWORK-IORGLQ+1 Warning: 'iorglq' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:705:0: CALL CCOPY( M-P, WORK(IORBDB+P), 1, U2, 1 ) Warning: 'iorbdb' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: $ CHILDINFO ) Warning: 'ibbcsd' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib22e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib22d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib21e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib21d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib12e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib12d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib11e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd2by1.f:553:0: Warning: 'ib11d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:526:0: $ WORK(IORBDB), LORBDBWORK, CHILDINFO ) Warning: 'itauq2' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:526:0: Warning: 'itauq1' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:526:0: Warning: 'itaup2' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:579:0: $ WORK(IORGQR), LORGQRWORK, INFO ) Warning: 'iorgqr' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:550:0: $ WORK(IORGLQ), LORGLQWORK, INFO ) Warning: 'iorglq' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:526:0: $ WORK(IORBDB), LORBDBWORK, CHILDINFO ) Warning: 'iorbdb' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: $ LBBCSDWORK, INFO ) Warning: 'ibbcsd' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib22e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib22d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib21e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib21d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib12e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib12d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib11e' may be used uninitialized in this function [-Wmaybe-uninitialized]
cuncsd.f:597:0: Warning: 'ib11d' may be used uninitialized in this function [-Wmaybe-uninitialized]
cungbr.f:240:18: LWKOPT = WORK( 1 ) 1 Warning: Possible change of value in conversion from COMPLEX(4) to INTEGER(4) at (1) [-Wconversion]
cungql.f:210:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: 'nb' may be used uninitialized in this function [-Wmaybe-uninitialized]
cungrq.f:210:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: 'nb' may be used uninitialized in this function [-Wmaybe-uninitialized]
cunmql.f:281:0: IWS = NW*NB Warning: 'nb' may be used uninitialized in this function [-Wmaybe-uninitialized]
cunmrq.f:282:0: IWS = NW*NB Warning: 'nb'
may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: $ WORK( WSTART ), IWORK, INFO ) Warning: ???z??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???poles??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???ivt??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???is??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???ic??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???givnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???givcol??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???difr??? may be used uninitialized in this function [-Wmaybe-uninitialized] dbdsdc.f:461:0: Warning: ???difl??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgbrfsx.f:523:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgbsvx.f:480:0: COLCND = MAX( RCMIN, SMLNUM ) / MIN( RCMAX, BIGNUM ) Warning: ???smlnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgbsvx.f:480:0: Warning: ???bignum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgebak.f:242:19: K = SCALE( I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgebak.f:256:19: K = SCALE( I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgees.f:308:21: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgeesx.f:388:21: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgeesx.f:316:0: $ MAXWRK, MINWRK Warning: ???maxwrk??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgeev.f:283:24: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgeev.f:292:24: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgeev.f:299:24: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgeevx.f:423:21: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgehrd.f:300:0: DO 40 I = ILO, IHI - 1 - NX, NB Warning: ???nx??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgels.f:498:0: WORK( 1 ) = DBLE( WSIZE ) Warning: ???wsize??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgels.f:357:0: IF( .NOT.TPSD ) THEN Warning: ???tpsd??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] dgelss.f:260:28: LWORK_DGEQRF=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:264:28: LWORK_DORMQR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:279:28: LWORK_DGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:283:28: LWORK_DORMBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:287:28: LWORK_DORGBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:311:31: LWORK_DGELQF=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:315:31: LWORK_DGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:319:31: LWORK_DORMBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:323:31: LWORK_DORGBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:327:31: LWORK_DORMLQ=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:347:31: LWORK_DGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:351:31: LWORK_DORMBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:355:31: LWORK_DORGBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgelss.f:540:0: ELSE IF( N.GE.MNTHR .AND. LWORK.GE.4*M+M*M+ Warning: ???mnthr??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgeqlf.f:215:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgerfsx.f:496:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgerqf.f:215:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesdd.f:1349:0: $ WORK( IL ), M ) Warning: ???il??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesdd.f:921:0: $ WORK( IR ), LDWRKR ) Warning: ???ir??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesdd.f:988:0: IF( N.GE.MNTHR ) THEN Warning: ???mnthr??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesdd.f:609:0: LDWRKR = ( LWORK-N*N-3*N-BDSPAC ) / N Warning: ???bdspac??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] dgesvd.f:317:25: LWORK_DGEQRF=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:320:27: LWORK_DORGQR_N=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:322:27: LWORK_DORGQR_M=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:326:25: LWORK_DGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:330:27: LWORK_DORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:334:27: LWORK_DORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:450:28: LWORK_DGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:455:33: LWORK_DORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:461:33: LWORK_DORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:478:25: LWORK_DGELQF=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:481:27: LWORK_DORGLQ_N=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:483:27: LWORK_DORGLQ_M=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:487:25: LWORK_DGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:491:27: LWORK_DORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:495:27: LWORK_DORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:610:28: LWORK_DGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:616:32: LWORK_DORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:622:32: LWORK_DORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgesvd.f:3461:0: IF( IE.LT.2 ) THEN Warning: ???ie??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesvd.f:3201:0: IF( LWORK.GE.WRKBL+LDA*M ) THEN Warning: ???wrkbl??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesvd.f:2063:0: IF( N.GE.MNTHR ) THEN Warning: ???mnthr??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesvd.f:237:0: INTEGER BDSPAC, BLK, CHUNK, I, IE, IERR, IR, ISCL, Warning: ???bdspac??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesvx.f:456:0: COLCND = MAX( RCMIN, SMLNUM ) / MIN( RCMAX, BIGNUM ) Warning: ???smlnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgesvx.f:456:0: Warning: ???bignum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgetc2.f:204:0: A( N, N ) = SMIN Warning: ???smin??? may be used uninitialized in this function [-Wmaybe-uninitialized] dgetc2.f:186:0: $ CALL DSWAP( N, A( 1, JPV ), 1, A( 1, I ), 1 ) Warning: ???jpv??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] dgetc2.f:180:0: $ CALL DSWAP( N, A( IPV, 1 ), LDA, A( I, 1 ), LDA ) Warning: ???ipv??? may be used uninitialized in this function [-Wmaybe-uninitialized] dggbak.f:258:19: K = RSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggbak.f:268:19: K = RSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggbak.f:282:19: K = LSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggbak.f:292:19: K = LSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggbal.f:528:14: IR = LSCALE( I ) + SIGN( HALF, LSCALE( I ) ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggbal.f:536:14: JC = RSCALE( I ) + SIGN( HALF, RSCALE( I ) ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggglm.f:287:13: LOPT = WORK( M+NP+1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dgglse.f:282:13: LOPT = WORK( P+MN+1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggqrf.f:282:13: LOPT = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dggrqf.f:281:13: LOPT = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dhgeqz.f:1344:0: IF( ILZ ) THEN Warning: ???ilz??? may be used uninitialized in this function [-Wmaybe-uninitialized] dhgeqz.f:835:0: IF( ILQ ) THEN Warning: ???ilq??? may be used uninitialized in this function [-Wmaybe-uninitialized] dhgeqz.f:713:0: IF( .NOT.ILSCHR ) THEN Warning: ???ilschr??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed0.f:386:0: $ IWORK( SUBPBS+1 ), INFO ) Warning: ???iwrem??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed0.f:386:0: Warning: ???iqptr??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed0.f:386:0: Warning: ???iq??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed0.f:386:0: Warning: ???iprmpt??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed0.f:386:0: Warning: ???iperm??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed0.f:386:0: Warning: ???igivpt??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed0.f:386:0: Warning: ???igivcl??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed2.f:439:0: DLAMDA( K ) = D( PJ ) Warning: ???pj??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed6.f:404:0: $ TAU = TAU*SCLINV Warning: ???sclinv??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaed8.f:481:0: W( K ) = Z( JLAM ) Warning: ???jlam??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlag2s.f:143:25: SA( I, J ) = A( I, J ) 1 Warning: Possible change of value in conversion from REAL(8) to REAL(4) at (1) [-Wconversion] dlahqr.f:590:0: $ CS, SN ) Warning: ???i2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlahr2.f:305:0: A( K+NB, NB ) = EI Warning: ???ei??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlahrd.f:282:0: A( K+NB, NB ) = EI Warning: ???ei??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] dlals0.f:410:0: $ DIFRJ ) / ( POLES( I, 2 )+DJ ) Warning: ???difrj??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlangb.f:149:0: DOUBLE PRECISION SCALE, SUM, VALUE, TEMP Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlange.f:138:0: DOUBLE PRECISION SCALE, SUM, VALUE, TEMP Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlangt.f:130:0: DOUBLE PRECISION ANORM, SCALE, SUM, TEMP Warning: ???anorm??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlanhs.f:132:0: DOUBLE PRECISION SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlansb.f:153:0: DOUBLE PRECISION ABSA, SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlansf.f:233:0: DOUBLE PRECISION SCALE, S, VALUE, AA, TEMP Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlansp.f:138:0: DOUBLE PRECISION ABSA, SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlanst.f:124:0: DOUBLE PRECISION ANORM, SCALE, SUM Warning: ???anorm??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlansy.f:146:0: DOUBLE PRECISION ABSA, SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlantb.f:165:0: DOUBLE PRECISION SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlantp.f:149:0: DOUBLE PRECISION SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlantr.f:166:0: DOUBLE PRECISION SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlarrd.f:737:0: IF( W( JE ).GE.WUL .AND. IDISCU.GT.0 ) THEN Warning: ???wul??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlarrd.f:720:0: IF( W( JE ).LE.WLU .AND. IDISCL.GT.0 ) THEN Warning: ???wlu??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlarre.f:390:0: $ ((IRANGE.EQ.VALRNG).AND.(D(1).GT.VL).AND.(D(1).LE.VU)).OR. Warning: ???irange??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaruv.f:441:0: ISEED( 4 ) = IT4 Warning: ???it4??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaruv.f:440:0: ISEED( 3 ) = IT3 Warning: ???it3??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaruv.f:439:0: ISEED( 2 ) = IT2 Warning: ???it2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlaruv.f:438:0: ISEED( 1 ) = IT1 Warning: ???it1??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlasd2.f:443:0: J = J + 1 Warning: ???jprev??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlasd7.f:447:0: J = J + 1 Warning: ???jprev??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlasd8.f:325:0: $ / ( DSIGMA( I )+DJ ) Warning: ???difrj??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlasq4.f:420:0: TAU = S Warning: ???s??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlasv2.f:319:0: SSMAX = SIGN( SSMAX, TSIGN ) Warning: ???tsign??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlasy2.f:427:0: $ CALL DSWAP( 4, T16( 1, JPSV ), 1, T16( 1, I ), 1 ) Warning: ???jpsv??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] dlasy2.f:421:0: CALL DSWAP( 4, T16( IPSV, 1 ), 4, T16( I, 1 ), 4 ) Warning: ???ipsv??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlasyf_rook.f:698:0: ELSE IF( ( P.EQ.JMAX ) .OR. ( ROWMAX.LE.COLMAX ) ) Warning: ???jmax??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlat2s.f:152:28: SA( I, J ) = A( I, J ) 1 Warning: Possible change of value in conversion from REAL(8) to REAL(4) at (1) [-Wconversion] dlat2s.f:163:28: SA( I, J ) = A( I, J ) 1 Warning: Possible change of value in conversion from REAL(8) to REAL(4) at (1) [-Wconversion] dlatbs.f:794:0: X( J ) = X( J ) / TJJS - SUMJ Warning: ???tjjs??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlatps.f:775:0: X( J ) = X( J ) / TJJS - SUMJ Warning: ???tjjs??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlatrs.f:769:0: X( J ) = X( J ) / TJJS - SUMJ Warning: ???tjjs??? may be used uninitialized in this function [-Wmaybe-uninitialized] dlazq4.f:325:0: TAU = S Warning: ???s??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:478:0: $ WORK(ITAUQ1), WORK(IORBDB), LORBDB, CHILDINFO ) Warning: ???itauq1??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:478:0: Warning: ???itaup2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:478:0: Warning: ???itaup1??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:490:0: $ WORK(IORGQR), LORGQR, CHILDINFO ) Warning: ???iorgqr??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:465:0: LORGLQ = LWORK-IORGLQ+1 Warning: ???iorglq??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:663:0: CALL DCOPY( M-P, WORK(IORBDB+P), 1, U2, 1 ) Warning: ???iorbdb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: $ CHILDINFO ) Warning: ???ibbcsd??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib22e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib22d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib21e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib21d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib12e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib12d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib11e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd2by1.f:511:0: Warning: ???ib11d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:489:0: $ WORK(IORBDB), LORBDBWORK, CHILDINFO ) Warning: ???itauq2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:489:0: Warning: ???itauq1??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:489:0: Warning: ???itaup2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:489:0: Warning: ???itaup1??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:542:0: $ WORK(IORGQR), LORGQRWORK, INFO ) Warning: ???iorgqr??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:513:0: $ WORK(IORGLQ), LORGLQWORK, INFO ) Warning: ???iorglq??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:489:0: $ WORK(IORBDB), LORBDBWORK, CHILDINFO ) Warning: ???iorbdb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: $ WORK(IB22E), WORK(IBBCSD), LBBCSDWORK, INFO ) Warning: ???ibbcsd??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib22e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib22d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib21e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib21d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib12e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib12d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib11e??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorcsd.f:559:0: Warning: ???ib11d??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorgbr.f:239:18: LWKOPT = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dorgql.f:210:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dorgrq.f:210:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dormql.f:279:0: IWS = NW*NB Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dormrq.f:280:0: IWS = NW*NB Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dormrz.f:304:0: IWS = NW*NB Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] dpbsvx.f:432:0: SCOND = MAX( SMIN, SMLNUM ) / MIN( SMAX, BIGNUM ) Warning: ???smlnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dpbsvx.f:432:0: Warning: ???bignum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dporfsx.f:475:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dposvx.f:394:0: SCOND = MAX( SMIN, SMLNUM ) / MIN( SMAX, BIGNUM ) Warning: ???smlnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dposvx.f:394:0: Warning: ???bignum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dppsvx.f:394:0: SCOND = MAX( SMIN, SMLNUM ) / MIN( SMAX, BIGNUM ) Warning: ???smlnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dppsvx.f:394:0: Warning: ???bignum??? may be used uninitialized in this function [-Wmaybe-uninitialized] dspgvd.f:313:14: LWMIN = MAX( DBLE( LWMIN ), DBLE( WORK( 1 ) ) ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dspgvd.f:314:15: LIWMIN = MAX( DBLE( LIWMIN ), DBLE( IWORK( 1 ) ) ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dsptrf.f:599:0: IPIV( K ) = -KP Warning: ???imax??? may be used uninitialized in this function [-Wmaybe-uninitialized] dsptrf.f:315:0: CALL DSWAP( KP-1, AP( KNC ), 1, AP( KPC ), 1 ) Warning: ???kpc??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] dstedc.f:482:0: WORK( 1 ) = LWMIN Warning: ???lwmin??? may be used uninitialized in this function [-Wmaybe-uninitialized] dstedc.f:483:0: IWORK( 1 ) = LIWMIN Warning: ???liwmin??? may be used uninitialized in this function [-Wmaybe-uninitialized] dstein.f:341:0: $ XJ = XJM + PERTOL Warning: ???xjm??? may be used uninitialized in this function [-Wmaybe-uninitialized] dstein.f:389:0: IF( ABS( XJ-XJM ).GT.ORTOL ) Warning: ???ortol??? may be used uninitialized in this function [-Wmaybe-uninitialized] dstein.f:375:0: $ DASUM( BLKSIZ, WORK( INDRV1+1 ), 1 ) Warning: ???onenrm??? may be used uninitialized in this function [-Wmaybe-uninitialized] dstein.f:392:0: DO 80 I = GPIND, J - 1 Warning: ???gpind??? may be used uninitialized in this function [-Wmaybe-uninitialized] dstein.f:409:0: IF( NRM.LT.DTPCRT ) Warning: ???dtpcrt??? may be used uninitialized in this function [-Wmaybe-uninitialized] dsyevx.f:541:0: WORK( 1 ) = LWKOPT Warning: ???lwkopt??? may be used uninitialized in this function [-Wmaybe-uninitialized] dsygvd.f:336:13: LOPT = MAX( DBLE( LOPT ), DBLE( WORK( 1 ) ) ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dsygvd.f:337:14: LIOPT = MAX( DBLE( LIOPT ), DBLE( IWORK( 1 ) ) ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dsyrfsx.f:482:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dsysv.f:229:21: LWKOPT = WORK(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dsysv_rook.f:262:21: LWKOPT = WORK(1) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) [-Wconversion] dsytf2.f:593:0: IPIV( K ) = -KP Warning: ???imax??? may be used uninitialized in this function [-Wmaybe-uninitialized] dsytf2_rook.f:630:0: ELSE IF( ( P.EQ.JMAX ).OR.( ROWMAX.LE.COLMAX ) ) THEN Warning: ???jmax??? may be used uninitialized in this function [-Wmaybe-uninitialized] dsytf2_rook.f:408:0: IF( KP.NE.KK ) THEN Warning: ???imax??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtfttr.f:422:0: IJ = IJ - NP1X2 Warning: ???np1x2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtfttr.f:327:0: IJ = IJ - NX2 Warning: ???nx2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtgsna.f:673:0: DIF( KS ) = COND Warning: ???cond??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtgsyl.f:606:0: SCALE = SCALE2 Warning: ???scale2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtptri.f:229:0: $ AP( JCLAST ), AP( JC+1 ), 1 ) Warning: ???jclast??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtrsen.f:563:0: WORK( 1 ) = LWMIN Warning: ???lwmin??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtrsen.f:564:0: IWORK( 1 ) = LIWMIN Warning: ???liwmin??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtrttf.f:419:0: IJ = IJ - NP1X2 Warning: ???np1x2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtrttf.f:324:0: IJ = IJ - NX2 Warning: ???nx2??? may be used uninitialized in this function [-Wmaybe-uninitialized] dtzrzf.f:251:0: IWS = LDWORK*NB Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] iparmq.f:296:0: IPARMQ = 3*NS / 2 Warning: ???ns??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] iparmq.f:293:0: IF( NH.LE.KNWSWP ) THEN Warning: ???nh??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: $ WORK( WSTART ), IWORK, INFO ) Warning: ???z??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???poles??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???ivt??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???is??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???ic??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???givnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???givcol??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???difr??? may be used uninitialized in this function [-Wmaybe-uninitialized] sbdsdc.f:461:0: Warning: ???difl??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgbrfsx.f:523:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgbsvx.f:482:0: COLCND = MAX( RCMIN, SMLNUM ) / MIN( RCMAX, BIGNUM ) Warning: ???smlnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgbsvx.f:482:0: Warning: ???bignum??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgebak.f:242:19: K = SCALE( I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgebak.f:256:19: K = SCALE( I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgees.f:308:21: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgeesx.f:388:21: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgeesx.f:316:0: $ MAXWRK, MINWRK Warning: ???maxwrk??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgeev.f:283:24: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgeev.f:292:24: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgeev.f:299:24: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgeevx.f:422:21: HSWORK = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgehrd.f:300:0: DO 40 I = ILO, IHI - 1 - NX, NB Warning: ???nx??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgelsd.f:511:0: IF( LWORK.GE.MAX( 4*M+M*LDA+MAX( M, 2*M-4, NRHS, N-3*M ), Warning: ???wlalsd??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgelsd.f:504:0: ELSE IF( N.GE.MNTHR .AND. LWORK.GE.4*M+M*M+ Warning: ???mnthr??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgels.f:498:0: WORK( 1 ) = REAL( WSIZE ) Warning: ???wsize??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgels.f:357:0: IF( .NOT.TPSD ) THEN Warning: ???tpsd??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] sgelss.f:259:28: LWORK_SGEQRF=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:263:28: LWORK_SORMQR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:278:28: LWORK_SGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:282:28: LWORK_SORMBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:286:28: LWORK_SORGBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:310:31: LWORK_SGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:314:31: LWORK_SORMBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:318:31: LWORK_SORGBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:322:31: LWORK_SORMLQ=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:343:31: LWORK_SGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:347:31: LWORK_SORMBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:351:31: LWORK_SORGBR=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgelss.f:536:0: ELSE IF( N.GE.MNTHR .AND. LWORK.GE.4*M+M*M+ Warning: ???mnthr??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgeqlf.f:215:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgerfsx.f:496:23: REF_TYPE = PARAMS( LA_LINRX_ITREF_I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgerqf.f:218:0: IF( NB.GT.1 .AND. NB.LT.K ) THEN Warning: ???nb??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesdd.f:1349:0: $ WORK( IL ), M ) Warning: ???il??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesdd.f:921:0: $ WORK( IR ), LDWRKR ) Warning: ???ir??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesdd.f:988:0: IF( N.GE.MNTHR ) THEN Warning: ???mnthr??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesdd.f:609:0: LDWRKR = ( LWORK-N*N-3*N-BDSPAC ) / N Warning: ???bdspac??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] sgesvd.f:317:25: LWORK_SGEQRF=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:320:27: LWORK_SORGQR_N=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:322:27: LWORK_SORGQR_M=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:326:25: LWORK_SGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:330:27: LWORK_SORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:334:27: LWORK_SORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:450:28: LWORK_SGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:455:33: LWORK_SORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:461:33: LWORK_SORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:478:25: LWORK_SGELQF=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:481:27: LWORK_SORGLQ_N=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:483:27: LWORK_SORGLQ_M=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:487:25: LWORK_SGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:491:27: LWORK_SORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:495:27: LWORK_SORGBR_Q=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:611:28: LWORK_SGEBRD=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:617:32: LWORK_SORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:623:32: LWORK_SORGBR_P=DUM(1) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgesvd.f:3462:0: IF( IE.LT.2 ) THEN Warning: ???ie??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesvd.f:3202:0: IF( LWORK.GE.WRKBL+LDA*M ) THEN Warning: ???wrkbl??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesvd.f:2064:0: IF( N.GE.MNTHR ) THEN Warning: ???mnthr??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesvd.f:237:0: INTEGER BDSPAC, BLK, CHUNK, I, IE, IERR, IR, ISCL, Warning: ???bdspac??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesvx.f:456:0: COLCND = MAX( RCMIN, SMLNUM ) / MIN( RCMAX, BIGNUM ) Warning: ???smlnum??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgesvx.f:456:0: Warning: ???bignum??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgetc2.f:204:0: A( N, N ) = SMIN Warning: ???smin??? may be used uninitialized in this function [-Wmaybe-uninitialized] sgetc2.f:186:0: $ CALL SSWAP( N, A( 1, JPV ), 1, A( 1, I ), 1 ) Warning: ???jpv??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] sgetc2.f:180:0: $ CALL SSWAP( N, A( IPV, 1 ), LDA, A( I, 1 ), LDA ) Warning: ???ipv??? may be used uninitialized in this function [-Wmaybe-uninitialized] sggbak.f:258:19: K = RSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggbak.f:268:19: K = RSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggbak.f:282:19: K = LSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggbak.f:292:19: K = LSCALE( I ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggbal.f:528:14: IR = LSCALE( I ) + SIGN( HALF, LSCALE( I ) ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggbal.f:536:14: JC = RSCALE( I ) + SIGN( HALF, RSCALE( I ) ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggglm.f:287:13: LOPT = WORK( M+NP+1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sgglse.f:282:13: LOPT = WORK( P+MN+1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggqrf.f:282:13: LOPT = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] sggrqf.f:281:13: LOPT = WORK( 1 ) 1 Warning: Possible change of value in conversion from REAL(4) to INTEGER(4) at (1) [-Wconversion] shgeqz.f:1344:0: IF( ILZ ) THEN Warning: ???ilz??? may be used uninitialized in this function [-Wmaybe-uninitialized] shgeqz.f:835:0: IF( ILQ ) THEN Warning: ???ilq??? may be used uninitialized in this function [-Wmaybe-uninitialized] shgeqz.f:713:0: IF( .NOT.ILSCHR ) THEN Warning: ???ilschr??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed0.f:386:0: $ IWORK( SUBPBS+1 ), INFO ) Warning: ???iwrem??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed0.f:386:0: Warning: ???iqptr??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed0.f:386:0: Warning: ???iq??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed0.f:386:0: Warning: ???iprmpt??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed0.f:386:0: Warning: ???iperm??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed0.f:386:0: Warning: ???igivpt??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed0.f:386:0: Warning: ???igivcl??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed2.f:439:0: DLAMDA( K ) = D( PJ ) Warning: ???pj??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed6.f:404:0: $ TAU = TAU*SCLINV Warning: ???sclinv??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaed8.f:481:0: W( K ) = Z( JLAM ) Warning: ???jlam??? may be used uninitialized in this function [-Wmaybe-uninitialized] slahqr.f:590:0: $ CS, SN ) Warning: ???i2??? may be used uninitialized in this function [-Wmaybe-uninitialized] slahr2.f:305:0: A( K+NB, NB ) = EI Warning: ???ei??? may be used uninitialized in this function [-Wmaybe-uninitialized] slahrd.f:282:0: A( K+NB, NB ) = EI Warning: ???ei??? may be used uninitialized in this function [-Wmaybe-uninitialized] slals0.f:410:0: $ DIFRJ ) / ( POLES( I, 2 )+DJ ) Warning: ???difrj??? 
may be used uninitialized in this function [-Wmaybe-uninitialized] slangb.f:149:0: REAL SCALE, SUM, VALUE, TEMP Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slange.f:138:0: REAL SCALE, SUM, VALUE, TEMP Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slangt.f:130:0: REAL ANORM, SCALE, SUM, TEMP Warning: ???anorm??? may be used uninitialized in this function [-Wmaybe-uninitialized] slanhs.f:132:0: REAL SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slansb.f:153:0: REAL ABSA, SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slansf.f:234:0: REAL SCALE, S, VALUE, AA, TEMP Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slansp.f:138:0: REAL ABSA, SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slanst.f:124:0: REAL ANORM, SCALE, SUM Warning: ???anorm??? may be used uninitialized in this function [-Wmaybe-uninitialized] slansy.f:146:0: REAL ABSA, SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slantb.f:165:0: REAL SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slantp.f:149:0: REAL SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slantr.f:166:0: REAL SCALE, SUM, VALUE Warning: ???value??? may be used uninitialized in this function [-Wmaybe-uninitialized] slarrd.f:737:0: IF( W( JE ).GE.WUL .AND. IDISCU.GT.0 ) THEN Warning: ???wul??? may be used uninitialized in this function [-Wmaybe-uninitialized] slarrd.f:720:0: IF( W( JE ).LE.WLU .AND. IDISCL.GT.0 ) THEN Warning: ???wlu??? may be used uninitialized in this function [-Wmaybe-uninitialized] slarre.f:394:0: $ ((IRANGE.EQ.VALRNG).AND.(D(1).GT.VL).AND.(D(1).LE.VU)).OR. Warning: ???irange??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaruv.f:442:0: ISEED( 4 ) = IT4 Warning: ???it4??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaruv.f:441:0: ISEED( 3 ) = IT3 Warning: ???it3??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaruv.f:440:0: ISEED( 2 ) = IT2 Warning: ???it2??? may be used uninitialized in this function [-Wmaybe-uninitialized] slaruv.f:439:0: ISEED( 1 ) = IT1 Warning: ???it1??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasd2.f:443:0: J = J + 1 Warning: ???jprev??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasd7.f:447:0: J = J + 1 Warning: ???jprev??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasd8.f:325:0: $ / ( DSIGMA( I )+DJ ) Warning: ???difrj??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasq4.f:420:0: TAU = S Warning: ???s??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasv2.f:319:0: SSMAX = SIGN( SSMAX, TSIGN ) Warning: ???tsign??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasy2.f:427:0: $ CALL SSWAP( 4, T16( 1, JPSV ), 1, T16( 1, I ), 1 ) Warning: ???jpsv??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasy2.f:421:0: CALL SSWAP( 4, T16( IPSV, 1 ), 4, T16( I, 1 ), 4 ) Warning: ???ipsv??? may be used uninitialized in this function [-Wmaybe-uninitialized] slasyf_rook.f:698:0: ELSE IF( ( P.EQ.JMAX ) .OR. 
( ROWMAX.LE.COLMAX ) ) Warning: 'jmax' may be used uninitialized in this function [-Wmaybe-uninitialized]
[The make output continues with a long run of gfortran warnings of the same two kinds, "'<variable>' may be used uninitialized in this function" (-Wmaybe-uninitialized) and "Possible change of value in conversion" (-Wconversion), for the remaining single- and double-precision LAPACK source files through the end of the FBLASLAPACK build; they are warnings only, not errors.]
********End of Output of running make on FBLASLAPACK *******
Not checking for library in Download FBLASLAPACK: [] because no functions given to check for
================================================================================
TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154)
TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
  - libDir may be a list of directories
  - libName may be a list of library names
================================================================================
TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154)
TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
  - libDir may be a list of directories
  - libName may be a list of library names
No functions to check for in library [] []
No functions to check for in library [] []
Checking for headers Download FBLASLAPACK: ['/home/wangzl/moose-compilers/petsc-3.11.4/include', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include/gfortran/7.1.0', '/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include']
================================================================================
TEST checkSharedLibrary from config.packages.fblaslapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817)
TESTING: checkSharedLibrary from config.packages.fblaslapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817)
  By default we don't care about checking if the library is shared
================================================================================
TEST alternateConfigureLibrary from config.packages.f2cblaslapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821)
TESTING: alternateConfigureLibrary from config.packages.f2cblaslapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821)
  Called if --with-packagename=0; does nothing by default
================================================================================
TEST checkDependencies from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738)
TESTING: checkDependencies from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738)
================================================================================
TEST configureLibrary from
config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:356) TESTING: configureLibrary from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:356) ================================================================================ Checking for a functional BLAS and LAPACK in fblaslapack ================================================================================ TEST checkLib from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:112) TESTING: checkLib from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:112) Checking for BLAS and LAPACK symbols Checking for functions [ddot_] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a'] ['libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis 
-I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char ddot_(); static void _check_ddot_() { ddot_(); } int main() { _check_ddot_();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBFBLAS" to "1" Checking for functions [dgetrs_] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a'] ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', 
'-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char dgetrs_(); static void _check_dgetrs_() { dgetrs_(); } int main() { _check_dgetrs_();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBFLAPACK" to "1" Checking for functions [dgeev_] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a'] ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', 
'-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char dgeev_(); static void _check_dgeev_() { dgeev_(); } int main() { _check_dgeev_();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBFLAPACK" to "1" Found Fortran mangling on BLAS/LAPACK which is underscore Defined "BLASLAPACK_UNDERSCORE" to "1" ================================================================================ TEST checkESSL from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:436) TESTING: checkESSL from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:436) Check for the IBM ESSL library Checking for functions [iessl] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', 
'-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char iessl(); static void _check_iessl() { iessl(); } int main() { _check_iessl();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_iessl': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `iessl' collect2: error: ld returned 1 exit status ================================================================================ TEST checkPESSL from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:477) TESTING: checkPESSL from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:477) Check for the IBM PESSL library - and error out - if used instead of ESSL Checking for functions [ipessl] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', 
'-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char ipessl(); static void _check_ipessl() { ipessl(); } int main() { _check_ipessl();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_ipessl': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `ipessl' collect2: error: ld returned 1 exit status ================================================================================ TEST checkMKL from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:444) TESTING: checkMKL from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:444) Check for Intel MKL library Checking for functions [mkl_set_num_threads] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', 
'-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char mkl_set_num_threads(); static void _check_mkl_set_num_threads() { mkl_set_num_threads(); } int main() { _check_mkl_set_num_threads();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_mkl_set_num_threads': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `mkl_set_num_threads' collect2: error: ld returned 1 exit status ================================================================================ TEST checkMissing from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:502) TESTING: checkMissing from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:502) Check for missing LAPACK routines Checking for functions [dgeev_ dgels_ dgelss_ dgeqrf_ dgerfs_ dgesv_ dgesvd_ dgetrf_ dgetri_ dgetrs_ dgges_ dhgeqz_ dhseqr_ dormqr_ dpotrf_ dpotri_ dpotrs_ dpttrf_ dpttrs_ dstebz_ dstein_ dsteqr_ dsyev_ dsyevx_ dsygvx_ dsytrf_ dsytri_ dsytrs_ dtgsen_ dtrsen_ dtrtrs_ 
dorgqr_] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a'] ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char dgeev_(); static void _check_dgeev_() { dgeev_(); } char dgels_(); static void _check_dgels_() { dgels_(); } char dgelss_(); static void _check_dgelss_() { dgelss_(); } char dgeqrf_(); static void _check_dgeqrf_() { dgeqrf_(); } char dgerfs_(); static void _check_dgerfs_() { dgerfs_(); } char dgesv_(); static void _check_dgesv_() { dgesv_(); } char dgesvd_(); static void _check_dgesvd_() { dgesvd_(); } char dgetrf_(); static void _check_dgetrf_() { dgetrf_(); } char dgetri_(); static void _check_dgetri_() { dgetri_(); } char dgetrs_(); static void _check_dgetrs_() { dgetrs_(); } char dgges_(); static void _check_dgges_() { dgges_(); } char dhgeqz_(); static void _check_dhgeqz_() { dhgeqz_(); } char dhseqr_(); static void _check_dhseqr_() { dhseqr_(); } char dormqr_(); static void _check_dormqr_() { dormqr_(); } char dpotrf_(); static void _check_dpotrf_() { dpotrf_(); } char dpotri_(); static void _check_dpotri_() { dpotri_(); } char dpotrs_(); static void _check_dpotrs_() { dpotrs_(); } char dpttrf_(); static void _check_dpttrf_() { dpttrf_(); } char dpttrs_(); static void _check_dpttrs_() { dpttrs_(); } char dstebz_(); static void _check_dstebz_() { dstebz_(); } char dstein_(); static void _check_dstein_() { dstein_(); } char dsteqr_(); static void _check_dsteqr_() { dsteqr_(); } char dsyev_(); static void _check_dsyev_() { dsyev_(); } char dsyevx_(); static void _check_dsyevx_() { dsyevx_(); } char dsygvx_(); static void _check_dsygvx_() { dsygvx_(); } char dsytrf_(); static void _check_dsytrf_() { dsytrf_(); } char dsytri_(); static void _check_dsytri_() { dsytri_(); } char dsytrs_(); static void _check_dsytrs_() { dsytrs_(); } char dtgsen_(); static void _check_dtgsen_() { dtgsen_(); } char dtrsen_(); static void _check_dtrsen_() { dtrsen_(); } char dtrtrs_(); static void _check_dtrtrs_() { dtrtrs_(); } char dorgqr_(); static void _check_dorgqr_() { dorgqr_(); } int main() { _check_dgeev_(); _check_dgels_(); _check_dgelss_(); _check_dgeqrf_(); _check_dgerfs_(); _check_dgesv_(); _check_dgesvd_(); _check_dgetrf_(); _check_dgetri_(); _check_dgetrs_(); _check_dgges_(); _check_dhgeqz_(); _check_dhseqr_(); _check_dormqr_(); _check_dpotrf_(); _check_dpotri_(); _check_dpotrs_(); _check_dpttrf_(); _check_dpttrs_(); _check_dstebz_(); _check_dstein_(); _check_dsteqr_(); _check_dsyev_(); _check_dsyevx_(); _check_dsygvx_(); _check_dsytrf_(); _check_dsytri_(); _check_dsytrs_(); _check_dtgsen_(); _check_dtrsen_(); _check_dtrtrs_(); _check_dorgqr_();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt 
-lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBFLAPACK" to "1" ================================================================================ TEST checklsame from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:520) TESTING: checklsame from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:520) Do the BLAS/LAPACK libraries have a valid lsame() function with correction binding. 
Lion and xcode 4.2 do not Checking for functions [lsame_] in library ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] [] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char lsame_(); static void _check_lsame_() { lsame_(); } int main() { _check_lsame_();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Defined "HAVE_LIBFLAPACK" to "1" Defined "HAVE_LIBFBLAS" to "1" Defined "HAVE_LIBM" to "1" Defined "HAVE_LIBSTDC__" to "1" Defined "HAVE_LIBDL" to "1" Defined "HAVE_LIBMPIFORT" to "1" Defined "HAVE_LIBMPI" to "1" Defined "HAVE_LIBRT" to "1" Defined "HAVE_LIBPTHREAD" to "1" Defined "HAVE_LIBGFORTRAN" to "1" Defined "HAVE_LIBM" to "1" Defined "HAVE_LIBGFORTRAN" to "1" Defined "HAVE_LIBM" to "1" Defined "HAVE_LIBGCC_S" to "1" Defined "HAVE_LIBQUADMATH" to "1" Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread 
-I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" char *dgeev_(void); char* testroutine(void){return dgeev_(); }Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" char *dgeev_(void); char* testroutine(void){return dgeev_(); }Executing: mpicc -o /tmp/petsc-wjcu960y/config.setCompilers/libconftest.so -shared -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.setCompilers/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -lflapack -lfblas -lflapack -lfblas -lm -lstdc++ -ldl -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -lgfortran -lm -lgcc_s 
-lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl ================================================================================ TEST checkRuntimeIssues from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:579) TESTING: checkRuntimeIssues from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/BlasLapack.py:579) Determines if BLAS/LAPACK routines use 32 or 64 bit integers Checking if BLAS/LAPACK routines use 32 or 64 bit integers All intermediate test results are stored in /tmp/petsc-wjcu960y/config.packages.BlasLapack Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.packages.BlasLapack/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.packages.BlasLapack -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.packages.BlasLapack/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #if STDC_HEADERS #include #include #include #endif int main() { FILE *output = fopen("runtimetestoutput","w"); extern double ddot_(const int*,const double*,const int *,const double*,const int*); double x1mkl[4] = {3.0,5.0,7.0,9.0}; int one1mkl = 1,nmkl = 2; double dotresultmkl = 0; dotresultmkl = ddot_(&nmkl,x1mkl,&one1mkl,x1mkl,&one1mkl); fprintf(output, "-known-64-bit-blas-indices=%d",dotresultmkl != 34);; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.packages.BlasLapack/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.packages.BlasLapack/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin 
-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Testing executable /tmp/petsc-wjcu960y/config.packages.BlasLapack/conftest to see if it can be run Executing: /tmp/petsc-wjcu960y/config.packages.BlasLapack/conftest Executing: /tmp/petsc-wjcu960y/config.packages.BlasLapack/conftest Checking for 64 bit blas indices: result 0 ================================================================================ TEST checkSharedLibrary from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.BlasLapack(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST alternateConfigureLibrary from config.packages.sundials(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.sundials(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.spai(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.spai(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.pARMS(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.pARMS(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.p4est(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.p4est(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST checkDependencies from config.packages.mkl_sparse_optimize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies 
from config.packages.mkl_sparse_optimize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.mkl_sparse_optimize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/mkl_sparse_optimize.py:65) TESTING: configureLibrary from config.packages.mkl_sparse_optimize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/packages/mkl_sparse_optimize.py:65) ================================================================================== Checking for a functional mkl_sparse_optimize Checking for library in Compiler specific search MKL_SPARSE_OPTIMIZE: [] ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [mkl_sparse_optimize mkl_sparse_s_create_bsr] in library [] ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure 
-I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.packages.BlasLapack -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char mkl_sparse_optimize(); static void _check_mkl_sparse_optimize() { mkl_sparse_optimize(); } char mkl_sparse_s_create_bsr(); static void _check_mkl_sparse_s_create_bsr() { mkl_sparse_s_create_bsr(); } int main() { _check_mkl_sparse_optimize(); _check_mkl_sparse_s_create_bsr();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm 
-lpthread -lz -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_mkl_sparse_optimize': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `mkl_sparse_optimize' /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_mkl_sparse_s_create_bsr': /tmp/petsc-wjcu960y/config.libraries/conftest.c:7: undefined reference to `mkl_sparse_s_create_bsr' collect2: error: ld returned 1 exit status ================================================================================ TEST checkSharedLibrary from config.packages.mkl_sparse_optimize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.mkl_sparse_optimize(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST checkDependencies from config.packages.mkl_sparse(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.mkl_sparse(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.mkl_sparse(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) TESTING: configureLibrary from config.packages.mkl_sparse(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) Find an installation and check if it can work with PETSc ================================================================================== Checking for a functional mkl_sparse Checking for library in Compiler specific search MKL_SPARSE: [] ================================================================================ TEST check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) TESTING: check from config.libraries(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/libraries.py:154) Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName" - libDir may be a list of directories - libName may be a list of library names Checking for functions [mkl_dcsrmv] in library [] ['/home/wangzl/moose-compilers/petsc-3.11.4/lib/libflapack.a', '/home/wangzl/moose-compilers/petsc-3.11.4/lib/libfblas.a', 'libm.a', '-lstdc++', '-ldl', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lmpifort', '-lmpi', '-lrt', '-lpthread', '-lgfortran', '-lm', '-Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7', '-L/usr/lib64/gcc/x86_64-suse-linux/7', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7', 
'-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4', '-Wl,-rpath,/usr/x86_64-suse-linux/lib', '-L/usr/x86_64-suse-linux/lib', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt', '-Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] Executing: mpicc -c -o /tmp/petsc-wjcu960y/config.libraries/conftest.o -I/tmp/petsc-wjcu960y/config.compilers -I/tmp/petsc-wjcu960y/config.utilities.closure -I/tmp/petsc-wjcu960y/config.headers -I/tmp/petsc-wjcu960y/config.utilities.cacheDetails -I/tmp/petsc-wjcu960y/config.atomics -I/tmp/petsc-wjcu960y/config.functions -I/tmp/petsc-wjcu960y/config.utilities.featureTestMacros -I/tmp/petsc-wjcu960y/config.utilities.missing -I/tmp/petsc-wjcu960y/PETSc.options.scalarTypes -I/tmp/petsc-wjcu960y/config.packages.MPI -I/tmp/petsc-wjcu960y/config.types -I/tmp/petsc-wjcu960y/config.packages.pthread -I/tmp/petsc-wjcu960y/config.packages.metis -I/tmp/petsc-wjcu960y/config.packages.BlasLapack -I/tmp/petsc-wjcu960y/config.setCompilers -I/tmp/petsc-wjcu960y/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char mkl_dcsrmv(); static void _check_mkl_dcsrmv() { mkl_dcsrmv(); } int main() { _check_mkl_dcsrmv();; return 0; } Executing: mpicc -o /tmp/petsc-wjcu960y/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fopenmp /tmp/petsc-wjcu960y/config.libraries/conftest.o -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lflapack -Wl,-rpath,/home/wangzl/moose-compilers/petsc-3.11.4/lib -L/home/wangzl/moose-compilers/petsc-3.11.4/lib -lfblas -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/7 -L/usr/lib64/gcc/x86_64-suse-linux/7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/ipp/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.5.281/linux/daal/lib/intel64_lin -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -L/opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib -lgfortran -lm -lgcc_s -lquadmath -lrt -lm -lpthread -lz -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /tmp/petsc-wjcu960y/config.libraries/conftest.o: in function `_check_mkl_dcsrmv': /tmp/petsc-wjcu960y/config.libraries/conftest.c:5: undefined reference to `mkl_dcsrmv' collect2: error: ld returned 1 exit status ================================================================================ TEST checkSharedLibrary from config.packages.mkl_sparse(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) TESTING: checkSharedLibrary from config.packages.mkl_sparse(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:817) By default we don't care about checking if the library is shared ================================================================================ TEST alternateConfigureLibrary from config.packages.mkl_cpardiso(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from 
config.packages.mkl_cpardiso(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.fftw(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.fftw(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.elemental(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.elemental(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.ml(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.ml(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from config.packages.mkl_pardiso(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) TESTING: alternateConfigureLibrary from config.packages.mkl_pardiso(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:821) Called if --with-packagename=0; does nothing by default ================================================================================ TEST checkDependencies from config.packages.SuperLU_DIST(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) TESTING: checkDependencies from config.packages.SuperLU_DIST(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:738) ================================================================================ TEST configureLibrary from config.packages.SuperLU_DIST(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) TESTING: configureLibrary from config.packages.SuperLU_DIST(/tmp/stack_temp.rFVgkc/petsc-3.11.4/config/BuildSystem/config/package.py:763) Find an installation and check if it can work with PETSc ================================================================================== Checking for a functional SuperLU_DIST Looking for SUPERLU_DIST at git.superlu_dist, hg.superlu_dist or a directory starting with ['SuperLU_DIST', 'superlu_dist'] Could not locate an existing copy of SUPERLU_DIST: ['git.slepc', 'petsc-pkg-scotch-c15036faac5f', 'petsc-pkg-metis-49e61501c498', 'petsc-pkg-parmetis-73dab469aa36', 'fblaslapack-3.4.2'] Downloading SuperLU_DIST =============================================================================== Trying to download file:///home/wangzl/packages/superlu_dist-6.1.1.tar.gz for SUPERLU_DIST =============================================================================== Downloading file:///home/wangzl/packages/superlu_dist-6.1.1.tar.gz to /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/_d_superlu_dist-6.1.1.tar.gz Extracting 
/tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/_d_superlu_dist-6.1.1.tar.gz Executing: cd /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages; chmod -R a+r superlu_dist-6.1.1;find superlu_dist-6.1.1 -type d -name "*" -exec chmod a+rx {} \; Looking for SUPERLU_DIST at git.superlu_dist, hg.superlu_dist or a directory starting with ['SuperLU_DIST', 'superlu_dist'] Found a copy of SUPERLU_DIST in superlu_dist-6.1.1 Have to rebuild SUPERLU_DIST, /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/externalpackages/superlu_dist-6.1.1/superlu_dist.petscconf != /tmp/stack_temp.rFVgkc/petsc-3.11.4/linux-opt/lib/petsc/conf/pkg.conf.superlu_dist =============================================================================== Configuring SUPERLU_DIST with cmake, this may take several minutes =============================================================================== From knepley at gmail.com Wed Apr 22 18:00:10 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 22 Apr 2020 19:00:10 -0400 Subject: [petsc-users] Fwd: Installation problem when Configuring SUPERLU_DIST In-Reply-To: References: <1fa97190-389c-4c2a-b23d-8a741ba12651@googlegroups.com> Message-ID: On Wed, Apr 22, 2020 at 6:48 PM Fande Kong wrote: > > We did not get a stack back this time. > > > Let us ping PETSc guys. > How did this die? If it got a SIGINT, Python should catch it and at least give us the stack. Is CMake segfaulting? Matt > Thanks, > > Fande, > > > ---------- Forwarded message --------- > From: Zile Wang > Date: Tue, Apr 21, 2020 at 10:55 AM > Subject: Re: Installation problem when Configuring SUPERLU_DIST > To: moose-users > > > Thanks, Fande. > > I removed the packages. However, the same problem. It seems that "*Configuring > SUPERLU_DIST" *does not work. There is no error message in the > configure.log file. > > Am I missing something? > > Zi-Le Wang > > -- > You received this message because you are subscribed to the Google Groups > "moose-users" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to moose-users+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/moose-users/1fa97190-389c-4c2a-b23d-8a741ba12651%40googlegroups.com > > . > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbuerkle at web.de Wed Apr 22 19:42:22 2020 From: mbuerkle at web.de (Marius Buerkle) Date: Thu, 23 Apr 2020 02:42:22 +0200 Subject: [petsc-users] PetscObjectGetComm In-Reply-To: References: <70717E72-B041-47CE-B259-6DAAE142B326@dsic.upv.es> Message-ID: An HTML attachment was scrubbed... URL: From Antoine.Cote3 at USherbrooke.ca Thu Apr 23 11:02:25 2020 From: Antoine.Cote3 at USherbrooke.ca (=?iso-8859-1?Q?Antoine_C=F4t=E9?=) Date: Thu, 23 Apr 2020 16:02:25 +0000 Subject: [petsc-users] Vec sizing using DMDA In-Reply-To: References: Message-ID: Hi, I'm using a C++/PETSc program to do Topological Optimization. A finite element analysis is solved at every iteration of the optimization. Displacements U are obtained using KSP solver. U is a Vec created using a 3D DMDA with 3 DOF (ux, uy, uz). Boundary conditions are stored in Vec N, and forces in Vec RHS. They also have 3 DOF, as they are created using VecDuplicate on U. My problem : I have multiple load cases (i.e. 
different sets of boundary conditions (b.c.) and forces). Displacements U are solved for each load case. I need to extract rapidly the b.c. and forces for each load case before solving. One way would be to change the DOF of the DMDA (e.g. for 8 load cases, we could use 3*8=24 DOF). Problem is, prior solving, we would need to loop on nodes to extract the b.c. and forces, for every node, for every load case and for every iteration of the optimization. This is a waste of time, as b.c. and forces are constant for a given load case. A better way would be to assemble b.c. and forces for every load case once, and read them afterwards as needed. This is currently done using a VecDuplicate on U to create multiple vectors N and RHS (N_0, N_1, RHS_0, RHS_1, etc.). Those vectors are hard coded, and can only solve a set number of load cases. I'm looking for a way to allocate dynamically the number of N and RHS vectors. What I would like : Given nlc, the number of load cases and nn, the number of nodes in the DMDA. Create matrices N and RHS of size (DOF*nn lines, nlc columns). While optimizing : for every load case, use N[all lines, current load case column] and RHS[all lines, current load case column], solve with KSP, obtain displacement U[all lines, current load case]. Would that be possible? Best regards, Antoine C?t? -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 23 14:23:22 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 23 Apr 2020 15:23:22 -0400 Subject: [petsc-users] Vec sizing using DMDA In-Reply-To: References: Message-ID: On Thu, Apr 23, 2020 at 12:12 PM Antoine C?t? wrote: > Hi, > > I'm using a C++/PETSc program to do Topological Optimization. A finite > element analysis is solved at every iteration of the optimization. > Displacements U are obtained using KSP solver. U is a Vec created using a > 3D DMDA with 3 DOF (ux, uy, uz). Boundary conditions are stored in Vec N, > and forces in Vec RHS. They also have 3 DOF, as they are created using > VecDuplicate on U. > > My problem : I have multiple load cases (i.e. different sets of boundary > conditions (b.c.) and forces). Displacements U are solved for each load > case. I need to extract rapidly the b.c. and forces for each load case > before solving. > > One way would be to change the DOF of the DMDA (e.g. for 8 load cases, we > could use 3*8=24 DOF). Problem is, prior solving, we would need to loop on > nodes to extract the b.c. and forces, for every node, for every load case > and for every iteration of the optimization. This is a waste of time, as > b.c. and forces are constant for a given load case. > > A better way would be to assemble b.c. and forces for every load case > once, and read them afterwards as needed. This is currently done using a > VecDuplicate on U to create multiple vectors N and RHS (N_0, N_1, RHS_0, > RHS_1, etc.). Those vectors are hard coded, and can only solve a set number > of load cases. > > I'm looking for a way to allocate dynamically the number of N and RHS > vectors. What I would like : > Given nlc, the number of load cases and nn, the number of nodes in the > DMDA. Create matrices N and RHS of size (DOF*nn lines, nlc columns). While > optimizing : for every load case, use N[all lines, current load case > column] and RHS[all lines, current load case column], solve with KSP, > obtain displacement U[all lines, current load case]. > > Would that be possible? 
> Why wouldn't you just allocate an array of Vecs, since you only use one at a time? Thanks, Matt > Best regards, > > Antoine C?t? > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldo.bonfiglioli at unibas.it Fri Apr 24 03:33:13 2020 From: aldo.bonfiglioli at unibas.it (Aldo Bonfiglioli) Date: Fri, 24 Apr 2020 10:33:13 +0200 Subject: [petsc-users] makefile changes since release 12 Message-ID: Hi there, the makefile I have been using for ages (up to 11.4) now fails with 12.5. I noticed that there have been several changes in include $(PETSC_DIR)/lib/petsc/conf/variables include $(PETSC_DIR)/lib/petsc/conf/rules btw. the two aforementioned versions. If I'm not wrong, *.F files should now be compiled with: > > .F.o .F90.o .F95.o: > ??????? ${PETSC_FCOMPILE} -o $@ $< However,? in ${PETSC_FCOMPILE} there are also my ${SOURCEF} fortran sources, so that I get the following compilation error: > gfortran -c -fPIC -Wall -ffree-line-length-0 > -Wno-unused-dummy-argument -g -I../../include/ -I. > -I/home/abonfi/src/petsc-3.12.5/include > -I/home/abonfi/src/petsc-3.12.5/linux_gnu/include???? getidx.f > ApplicationFunction.F ApplicationFunction_t.F bndry_iset.F > JacobianBoundaryConditions.F RHSFunction.F RHSFunction_t.F > RHSJacobian.F RHSJacobian_t.F blockdata.f bndvflx.F clearmem.F > lhsbc5.F lhsbc6.F exgeo.F newgeo.F ghost.F ghost2.F init.F iset.F > iset_t.F main.F matsch.F MotionSolver.F myTS.F nodres.F nodres_t.F > noname.f printmat2.F printmat.F printmatmm.F qb.F rdat.F readat.F > rgrdpts.F rhsbc1.F rhsbc4.F rhsbc5.F rhsbc5c.F sclsch.F > setbc4laplace.F setibc.F seterr.F setupRHS.F setupRHS_t.F setupLHS_b.F > solzne.F MatAllocaSeq.F test.F tmodel.F turbini.F turbsch.F update2.F > update3.F update4.F weakbc.F -o ApplicationFunction_t.o > ApplicationFunction_t.F > gfortran: fatal error: cannot specify ?-o? with ?-c?, ?-S? or ?-E? > with multiple files > compilation terminated. If I remove ${SOURCEF} from the ${PETSC_FCOMPILE} definition in $(PETSC_DIR)/lib/petsc/conf/variables things work, but I am not sure that this is the right thing to do. Thanks, Aldo PS My makefile is attached -- Dr. Aldo Bonfiglioli Associate professor of Fluid Machines Scuola di Ingegneria Universita' della Basilicata V.le dell'Ateneo lucano, 10 85100 Potenza ITALY tel:+39.0971.205203 fax:+39.0971.205215 web: http://docenti.unibas.it/site/home/docente.html?m=002423 -------------- next part -------------- VERSION = 3.12.0 PROGRAM = eulfs$(VERSION)-$(PETSC_ARCH) all: $(PROGRAM) FFLAGS = -I../../include/ -I. FCPPFLAGS = $(FCPPFLAGS) -I../../include/ -I. 
CFLAGS = # # ad hoc fix for SP3 # #FCPPFLAGS = ${PETSC_INCLUDE} ${PCONF} ${PETSCFLAGS} ${PETSC_PARCH} \ ${FPPFLAGS} -I$(FSPL_DIR)/include/ include $(PETSC_DIR)/lib/petsc/conf/variables include $(PETSC_DIR)/lib/petsc/conf/rules DEST = $(HOME)/bin/$(PETSC_ARCH) INSTALL = cp SOURCEC = SOURCEF = \ getidx.f \ ApplicationFunction.F \ ApplicationFunction_t.F \ bndry_iset.F \ JacobianBoundaryConditions.F \ RHSFunction.F \ RHSFunction_t.F \ RHSJacobian.F \ RHSJacobian_t.F \ blockdata.f \ bndvflx.F \ clearmem.F \ lhsbc5.F \ lhsbc6.F \ exgeo.F \ newgeo.F \ ghost.F \ ghost2.F \ init.F \ iset.F \ iset_t.F \ main.F \ matsch.F \ MotionSolver.F \ myTS.F \ nodres.F \ nodres_t.F \ noname.f \ printmat2.F \ printmat.F \ printmatmm.F \ qb.F \ rdat.F \ readat.F \ rgrdpts.F \ rhsbc1.F \ rhsbc4.F \ rhsbc5.F \ rhsbc5c.F \ sclsch.F \ setbc4laplace.F \ setibc.F \ seterr.F \ setupRHS.F \ setupRHS_t.F \ setupLHS_b.F \ solzne.F \ MatAllocaSeq.F \ test.F \ tmodel.F \ turbini.F \ turbsch.F \ update2.F \ update3.F \ update4.F \ weakbc.F SOURCEH = OBJSC = OBJSF = \ getidx.o \ ApplicationFunction.o \ ApplicationFunction_t.o \ bndry_iset.o \ JacobianBoundaryConditions.o \ RHSFunction.o \ RHSFunction_t.o \ RHSJacobian.o \ RHSJacobian_t.o \ blockdata.o \ bndvflx.o \ clearmem.o \ lhsbc5.o \ lhsbc6.o \ exgeo.o \ newgeo.o \ ghost.o \ ghost2.o \ init.o \ iset.o \ iset_t.o \ main.o \ matsch.o \ MotionSolver.o \ myTS.o \ nodres.o \ nodres_t.o \ noname.o \ printmat2.o \ printmat.o \ printmatmm.o \ qb.o \ rdat.o \ readat.o \ rgrdpts.o \ rhsbc1.o \ rhsbc4.o \ rhsbc5.o \ rhsbc5c.o \ sclsch.o \ setbc4laplace.o \ setibc.o \ seterr.o \ setupRHS.o \ setupRHS_t.o \ setupLHS_b.o \ solzne.o \ MatAllocaSeq.o \ test.o \ tmodel.o \ turbini.o \ turbsch.o \ update2.o \ update3.o \ update4.o \ weakbc.o LIBBASE = #LIBFLAGS = -L$(HOME)/lib/$(PETSC_ARCH) -lfxdr -lport -lmynag -lskit LIBFLAGS = -L$(HOME)/lib/$(PETSC_ARCH) -lfxdr -lport -lsparse-blas -lskit -ltirpc LIBS = \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libscalar.a \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libeuler.a \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libspl.a \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libns.a \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libturbo.a \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libgeo.a \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libchem.a \ $(FSPL_DIR)/lib/$(PETSC_ARCH)/libutil.a # # CLDFILES to be defined only for CRAY # #CLDFILES = dp_lapack.cld dp_blas.cld pat.cld #CLDFILES = dp_lapack.cld dp_blas.cld #look:; @echo $(SOURCEALL) $(OBJSF) look:; @echo "Look man! Isn't it weird? " $(PETSC_FCOMPILE) $(PROGRAM): $(OBJSF) $(OBJSC) $(LIBS) -$(FLINKER) $(CLDFILES) -o $(PROGRAM) $(OBJSF) $(OBJSC) $(LIBS) \ $(PETSC_FORTRAN_LIB) $(PETSC_LIB) $(LIBFLAGS) $(FSPL_DIR)/lib/$(PETSC_ARCH)/libgeo.a: cd $(FSPL_DIR)/src/geometry; $(MAKE) install $(FSPL_DIR)/lib/$(PETSC_ARCH)/libeuler.a: cd $(FSPL_DIR)/src/euler; $(MAKE) install $(FSPL_DIR)/lib/$(PETSC_ARCH)/libspl.a: cd $(FSPL_DIR)/src/schemes; $(MAKE) install $(FSPL_DIR)/lib/$(PETSC_ARCH)/libns.a: cd $(FSPL_DIR)/src/navier-stokes; $(MAKE) install $(FSPL_DIR)/lib/$(PETSC_ARCH)/libscalar.a: cd $(FSPL_DIR)/src/scalar; $(MAKE) install $(FSPL_DIR)/lib/$(PETSC_ARCH)/libturbo.a: cd $(FSPL_DIR)/src/turbo; $(MAKE) install $(FSPL_DIR)/lib/$(PETSC_ARCH)/libchem.a: cd $(FSPL_DIR)/src/chemistry; $(MAKE) install $(FSPL_DIR)/lib/$(PETSC_ARCH)/libutil.a: cd $(FSPL_DIR)/src/util; $(MAKE) install #.SUFFIXES: #.F.o: # $(U_FC) -c $(FFLAGS) $(FCPPFLAGS) $< #.f.o: # $(U_FC) -c $(FFLAGS) $< ######### checkout:; @co $(SOURCEF) install: $(PROGRAM) @echo Installing $(PROGRAM) in $(DEST) @if [ $(DEST) != . 
]; then \ (rm -f $(DEST)/$(PROGRAM); $(INSTALL) $(PROGRAM) $(DEST)); fi ### blockdata.o: ../../include/paramt.h ../../include/bnd.h \ ../../include/constants.h ../../include/bnd.com \ ../../include/conv.com ../../include/implicit.h \ ../../include/nboun.com ../../include/three.com bndvflx.o: ../../include/paramt.h ../../include/constants.h \ ../../include/implicit.h ../../include/bnd.h ../../include/bnd.com \ ../../include/three.com ../../include/nloc.com ../../include/flags.com \ ../../include/stream.com ../../include/io.com exgeo.o: ../../include/io.com ../../include/constants.h ../../include/nloc.com iset.o: ../../include/iset.com lhsbc5.o: ../../include/iset.com main.o: ../../include/stack.com matsch.o: ../../include/flags.com mshcnt.o: ../../include/verbose.com ../../include/io.com nodres.o: ../../include/paramt.h ../../include/bnd.h ../../include/constants.h \ ../../include/bnd.com ../../include/nloc.com ../../include/flags.com \ ../../include/stream.com ../../include/conv.com \ ../../include/nboun.com ../../include/implicit.h ../../include/io.com psub.o: ../../include/constants.h ../../include/paramt.h ../../include/nloc.com \ ../../include/flags.com rdat.o: ../../include/paramt.h ../../include/bnd.h ../../include/implicit.h \ ../../include/visco.com ../../include/constants.h \ ../../include/conv.com ../../include/stream.com \ ../../include/chorin.com ../../include/scalar.com \ ../../include/flags.com ../../include/turb.com \ ../../include/bnd.com ../../include/io.com \ ../../include/verbose.com readat.o: ../../include/constants.h ../../include/bnd.h ../../include/paramt.h \ ../../include/io.com ../../include/nloc.com ../../include/flags.com \ ../../include/stream.com rhsbc1.o: ../../include/paramt.h ../../include/constants.h \ ../../include/iset.com rhsbc4.o: ../../include/paramt.h ../../include/constants.h \ ../../include/iset.com rhsbc5.o: ../../include/paramt.h ../../include/iset.com \ ../../include/constants.h sclsch.o: ../../include/flags.com solzne.o: ../../include/io.com turbcomp.o: ../../include/paramt.h ../../include/constants.h ../../include/nloc.com \ ../../include/three.com ../../include/flags.com ../../include/turb.com \ ../../include/trip.com ../../include/visco.com \ ../../include/nboun.com ../../include/implicit.h ../../include/io.com update2.o: ../../include/constants.h ../../include/paramt.h \ ../../include/conv.com ../../include/nloc.com ../../include/verbose.com \ ../../include/implicit.h ../../include/iset.com \ ../../include/flags.com ../../include/io.com update3.o: ../../include/constants.h ../../include/paramt.h \ ../../include/implicit.h ../../include/conv.com ../../include/nloc.com \ ../../include/verbose.com ../../include/iset.com \ ../../include/flags.com ../../include/io.com update4.o: ../../include/constants.h ../../include/paramt.h \ ../../include/conv.com ../../include/nboun.com ../../include/nloc.com \ ../../include/verbose.com ../../include/implicit.h ../../include/io.com weakbc.o: ../../include/paramt.h ../../include/constants.h ../../include/bnd.h \ ../../include/bnd.com ../../include/three.com ../../include/nloc.com \ ../../include/implicit.h rotaterhs.f rotaterhs2.f From knepley at gmail.com Fri Apr 24 06:19:17 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 24 Apr 2020 07:19:17 -0400 Subject: [petsc-users] makefile changes since release 12 In-Reply-To: References: Message-ID: On Fri, Apr 24, 2020 at 4:34 AM Aldo Bonfiglioli wrote: > Hi there, > > the makefile I have been using for ages (up to 11.4) now fails with 12.5. 
> Yep, it is not that easy to reconstruct. I believe that the Fortran build rule used to be generated by configure. We stopped doing that, and added a generic rule in, but since we have no Fortran code in PETSc, it did not get properly tested. I think the fix is that this rule, like the rules for C and CXX, should use PETSC_FCOMPILE_SINGLE. However, that does not exist. Can you try the branch below? https://gitlab.com/petsc/petsc/-/commits/knepley/fix-fortran-rule git checkout knepley/fix-fortran-rule If it works for you, I will put in an MR for it. Thanks, Matt > I noticed that there have been several changes in > > include $(PETSC_DIR)/lib/petsc/conf/variables > include $(PETSC_DIR)/lib/petsc/conf/rules > > btw. the two aforementioned versions. > > If I'm not wrong, *.F files should now be compiled with: > > > > > .F.o .F90.o .F95.o: > > ${PETSC_FCOMPILE} -o $@ $< > However, in ${PETSC_FCOMPILE} there are also my ${SOURCEF} fortran > sources, > > so that I get the following compilation error: > > > gfortran -c -fPIC -Wall -ffree-line-length-0 > > -Wno-unused-dummy-argument -g -I../../include/ -I. > > -I/home/abonfi/src/petsc-3.12.5/include > > -I/home/abonfi/src/petsc-3.12.5/linux_gnu/include getidx.f > > ApplicationFunction.F ApplicationFunction_t.F bndry_iset.F > > JacobianBoundaryConditions.F RHSFunction.F RHSFunction_t.F > > RHSJacobian.F RHSJacobian_t.F blockdata.f bndvflx.F clearmem.F > > lhsbc5.F lhsbc6.F exgeo.F newgeo.F ghost.F ghost2.F init.F iset.F > > iset_t.F main.F matsch.F MotionSolver.F myTS.F nodres.F nodres_t.F > > noname.f printmat2.F printmat.F printmatmm.F qb.F rdat.F readat.F > > rgrdpts.F rhsbc1.F rhsbc4.F rhsbc5.F rhsbc5c.F sclsch.F > > setbc4laplace.F setibc.F seterr.F setupRHS.F setupRHS_t.F setupLHS_b.F > > solzne.F MatAllocaSeq.F test.F tmodel.F turbini.F turbsch.F update2.F > > update3.F update4.F weakbc.F -o ApplicationFunction_t.o > > ApplicationFunction_t.F > > gfortran: fatal error: cannot specify ?-o? with ?-c?, ?-S? or ?-E? > > with multiple files > > compilation terminated. > > If I remove ${SOURCEF} from the ${PETSC_FCOMPILE} definition in > > $(PETSC_DIR)/lib/petsc/conf/variables > > things work, but I am not sure that this is the right thing to do. > > Thanks, > > Aldo > > PS My makefile is attached > > -- > Dr. Aldo Bonfiglioli > Associate professor of Fluid Machines > Scuola di Ingegneria > Universita' della Basilicata > V.le dell'Ateneo lucano, 10 85100 Potenza ITALY > tel:+39.0971.205203 fax:+39.0971.205215 > web: http://docenti.unibas.it/site/home/docente.html?m=002423 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Apr 24 09:13:22 2020 From: jed at jedbrown.org (Jed Brown) Date: Fri, 24 Apr 2020 08:13:22 -0600 Subject: [petsc-users] makefile changes since release 12 In-Reply-To: References: Message-ID: <87eesd55vh.fsf@jedbrown.org> I'd say yes, that's the right thing. These are vestigial remnants of the legacy make system. I believe PETSc doesn't use this anywhere internally, and thus can be removed. I don't know if the following patch would break any existing correct usage. 
diff --git i/lib/petsc/conf/variables w/lib/petsc/conf/variables index a7de68e2ae..07bc4254c9 100644 --- i/lib/petsc/conf/variables +++ w/lib/petsc/conf/variables @@ -36,13 +36,13 @@ F_SH_LIB_PATH = ${PETSC_F_SH_LIB_PATH} PSOURCEC = $(SOURCEC:%=`pwd`/%) PSOURCECXX= $(SOURCECXX:%=`pwd`/%) PSOURCECU = $(SOURCECU:%=`pwd`/%) -PETSC_COMPILE = ${PCC} -c ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} ${PSOURCEC} +PETSC_COMPILE = ${PCC} -c ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} PETSC_CCOMPILE = ${CC} -c ${CC_FLAGS} ${CPPFLAGS} ${PETSC_CC_INCLUDES} -PETSC_CXXCOMPILE = ${CXX} -c ${CXX_FLAGS} ${CXXFLAGS} ${CXXCPPFLAGS} ${PSOURCECXX} +PETSC_CXXCOMPILE = ${CXX} -c ${CXX_FLAGS} ${CXXFLAGS} ${CXXCPPFLAGS} PETSC_COMPILE_SINGLE = ${PCC} -o $*.o -c ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} PETSC_CXXCOMPILE_SINGLE = ${CXX} -o $*.o -c ${CXX_FLAGS} ${CXXFLAGS} ${CXXCPPFLAGS} -PETSC_FCOMPILE = ${FC} -c ${FC_FLAGS} ${FFLAGS} ${FCPPFLAGS} ${SOURCEF} ${SOURCEF90} -PETSC_CUCOMPILE = ${CUDAC} ${CUDAC_FLAGS} -c --compiler-options="${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}" ${PSOURCECU} +PETSC_FCOMPILE = ${FC} -c ${FC_FLAGS} ${FFLAGS} ${FCPPFLAGS} +PETSC_CUCOMPILE = ${CUDAC} ${CUDAC_FLAGS} -c --compiler-options="${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}" PETSC_CUCOMPILE_SINGLE = ${CUDAC} -o $*.o ${CUDAC_FLAGS} -c --compiler-options="${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}" # # define OBJSC OBJSCXX and OBJSF OBJSCU Crossing over to dev discussion, I wish we could make these names more standard, like the following predefined ones (see `make -p`). COMPILE.F = $(FC) $(FFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c %.o: %.F # recipe to execute (built-in): $(COMPILE.F) $(OUTPUT_OPTION) $< LINK.F = $(FC) $(FFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) %: %.F # recipe to execute (built-in): $(LINK.F) $^ $(LOADLIBES) $(LDLIBS) -o $@ So we could define PETSC_COMPILE.F, PETSC_LINK.F, etc., and a user could write COMPILE.F = PETSC_COMPILE.F if all their *.F sources use PETSc, in which case all the standard rules would work. Or, they could use PETSc-enabled build only for a subset of their sources: $(SOURCES_USING_PETSC:%.F=%.o) : %.o : %.F $(PETSC_COMPILE.F) $(OUTPUT_OPTION) $< Aldo Bonfiglioli writes: > Hi there, > > the makefile I have been using for ages (up to 11.4) now fails with 12.5. > > I noticed that there have been several changes in > > include $(PETSC_DIR)/lib/petsc/conf/variables > include $(PETSC_DIR)/lib/petsc/conf/rules > > btw. the two aforementioned versions. > > If I'm not wrong, *.F files should now be compiled with: > >> >> .F.o .F90.o .F95.o: >> ??????? ${PETSC_FCOMPILE} -o $@ $< > However,? in ${PETSC_FCOMPILE} there are also my ${SOURCEF} fortran sources, > > so that I get the following compilation error: > >> gfortran -c -fPIC -Wall -ffree-line-length-0 >> -Wno-unused-dummy-argument -g -I../../include/ -I. >> -I/home/abonfi/src/petsc-3.12.5/include >> -I/home/abonfi/src/petsc-3.12.5/linux_gnu/include???? 
getidx.f >> ApplicationFunction.F ApplicationFunction_t.F bndry_iset.F >> JacobianBoundaryConditions.F RHSFunction.F RHSFunction_t.F >> RHSJacobian.F RHSJacobian_t.F blockdata.f bndvflx.F clearmem.F >> lhsbc5.F lhsbc6.F exgeo.F newgeo.F ghost.F ghost2.F init.F iset.F >> iset_t.F main.F matsch.F MotionSolver.F myTS.F nodres.F nodres_t.F >> noname.f printmat2.F printmat.F printmatmm.F qb.F rdat.F readat.F >> rgrdpts.F rhsbc1.F rhsbc4.F rhsbc5.F rhsbc5c.F sclsch.F >> setbc4laplace.F setibc.F seterr.F setupRHS.F setupRHS_t.F setupLHS_b.F >> solzne.F MatAllocaSeq.F test.F tmodel.F turbini.F turbsch.F update2.F >> update3.F update4.F weakbc.F -o ApplicationFunction_t.o >> ApplicationFunction_t.F >> gfortran: fatal error: cannot specify ?-o? with ?-c?, ?-S? or ?-E? >> with multiple files >> compilation terminated. > > If I remove ${SOURCEF} from the ${PETSC_FCOMPILE} definition in > > $(PETSC_DIR)/lib/petsc/conf/variables > > things work, but I am not sure that this is the right thing to do. > > Thanks, > > Aldo > > PS My makefile is attached > > -- > Dr. Aldo Bonfiglioli > Associate professor of Fluid Machines > Scuola di Ingegneria > Universita' della Basilicata > V.le dell'Ateneo lucano, 10 85100 Potenza ITALY > tel:+39.0971.205203 fax:+39.0971.205215 > web: http://docenti.unibas.it/site/home/docente.html?m=002423 > > VERSION = 3.12.0 > PROGRAM = eulfs$(VERSION)-$(PETSC_ARCH) > all: $(PROGRAM) > FFLAGS = -I../../include/ -I. > FCPPFLAGS = $(FCPPFLAGS) -I../../include/ -I. > CFLAGS = > # > # ad hoc fix for SP3 > # > #FCPPFLAGS = ${PETSC_INCLUDE} ${PCONF} ${PETSCFLAGS} ${PETSC_PARCH} \ > ${FPPFLAGS} -I$(FSPL_DIR)/include/ > > include $(PETSC_DIR)/lib/petsc/conf/variables > include $(PETSC_DIR)/lib/petsc/conf/rules > > > DEST = $(HOME)/bin/$(PETSC_ARCH) > INSTALL = cp > SOURCEC = > SOURCEF = \ > getidx.f \ > ApplicationFunction.F \ > ApplicationFunction_t.F \ > bndry_iset.F \ > JacobianBoundaryConditions.F \ > RHSFunction.F \ > RHSFunction_t.F \ > RHSJacobian.F \ > RHSJacobian_t.F \ > blockdata.f \ > bndvflx.F \ > clearmem.F \ > lhsbc5.F \ > lhsbc6.F \ > exgeo.F \ > newgeo.F \ > ghost.F \ > ghost2.F \ > init.F \ > iset.F \ > iset_t.F \ > main.F \ > matsch.F \ > MotionSolver.F \ > myTS.F \ > nodres.F \ > nodres_t.F \ > noname.f \ > printmat2.F \ > printmat.F \ > printmatmm.F \ > qb.F \ > rdat.F \ > readat.F \ > rgrdpts.F \ > rhsbc1.F \ > rhsbc4.F \ > rhsbc5.F \ > rhsbc5c.F \ > sclsch.F \ > setbc4laplace.F \ > setibc.F \ > seterr.F \ > setupRHS.F \ > setupRHS_t.F \ > setupLHS_b.F \ > solzne.F \ > MatAllocaSeq.F \ > test.F \ > tmodel.F \ > turbini.F \ > turbsch.F \ > update2.F \ > update3.F \ > update4.F \ > weakbc.F > SOURCEH = > OBJSC = > OBJSF = \ > getidx.o \ > ApplicationFunction.o \ > ApplicationFunction_t.o \ > bndry_iset.o \ > JacobianBoundaryConditions.o \ > RHSFunction.o \ > RHSFunction_t.o \ > RHSJacobian.o \ > RHSJacobian_t.o \ > blockdata.o \ > bndvflx.o \ > clearmem.o \ > lhsbc5.o \ > lhsbc6.o \ > exgeo.o \ > newgeo.o \ > ghost.o \ > ghost2.o \ > init.o \ > iset.o \ > iset_t.o \ > main.o \ > matsch.o \ > MotionSolver.o \ > myTS.o \ > nodres.o \ > nodres_t.o \ > noname.o \ > printmat2.o \ > printmat.o \ > printmatmm.o \ > qb.o \ > rdat.o \ > readat.o \ > rgrdpts.o \ > rhsbc1.o \ > rhsbc4.o \ > rhsbc5.o \ > rhsbc5c.o \ > sclsch.o \ > setbc4laplace.o \ > setibc.o \ > seterr.o \ > setupRHS.o \ > setupRHS_t.o \ > setupLHS_b.o \ > solzne.o \ > MatAllocaSeq.o \ > test.o \ > tmodel.o \ > turbini.o \ > turbsch.o \ > update2.o \ > update3.o \ > update4.o \ > weakbc.o > LIBBASE = > #LIBFLAGS = 
-L$(HOME)/lib/$(PETSC_ARCH) -lfxdr -lport -lmynag -lskit > LIBFLAGS = -L$(HOME)/lib/$(PETSC_ARCH) -lfxdr -lport -lsparse-blas -lskit -ltirpc > LIBS = \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libscalar.a \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libeuler.a \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libspl.a \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libns.a \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libturbo.a \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libgeo.a \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libchem.a \ > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libutil.a > # > # CLDFILES to be defined only for CRAY > # > #CLDFILES = dp_lapack.cld dp_blas.cld pat.cld > #CLDFILES = dp_lapack.cld dp_blas.cld > > #look:; @echo $(SOURCEALL) $(OBJSF) > look:; @echo "Look man! Isn't it weird? " $(PETSC_FCOMPILE) > > > $(PROGRAM): $(OBJSF) $(OBJSC) $(LIBS) > -$(FLINKER) $(CLDFILES) -o $(PROGRAM) $(OBJSF) $(OBJSC) $(LIBS) \ > $(PETSC_FORTRAN_LIB) $(PETSC_LIB) $(LIBFLAGS) > > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libgeo.a: > cd $(FSPL_DIR)/src/geometry; $(MAKE) install > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libeuler.a: > cd $(FSPL_DIR)/src/euler; $(MAKE) install > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libspl.a: > cd $(FSPL_DIR)/src/schemes; $(MAKE) install > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libns.a: > cd $(FSPL_DIR)/src/navier-stokes; $(MAKE) install > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libscalar.a: > cd $(FSPL_DIR)/src/scalar; $(MAKE) install > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libturbo.a: > cd $(FSPL_DIR)/src/turbo; $(MAKE) install > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libchem.a: > cd $(FSPL_DIR)/src/chemistry; $(MAKE) install > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libutil.a: > cd $(FSPL_DIR)/src/util; $(MAKE) install > > > #.SUFFIXES: > #.F.o: > # $(U_FC) -c $(FFLAGS) $(FCPPFLAGS) $< > #.f.o: > # $(U_FC) -c $(FFLAGS) $< > ######### > > > checkout:; @co $(SOURCEF) > > install: $(PROGRAM) > @echo Installing $(PROGRAM) in $(DEST) > @if [ $(DEST) != . 
]; then \ > (rm -f $(DEST)/$(PROGRAM); $(INSTALL) $(PROGRAM) $(DEST)); fi > ### > blockdata.o: ../../include/paramt.h ../../include/bnd.h \ > ../../include/constants.h ../../include/bnd.com \ > ../../include/conv.com ../../include/implicit.h \ > ../../include/nboun.com ../../include/three.com > bndvflx.o: ../../include/paramt.h ../../include/constants.h \ > ../../include/implicit.h ../../include/bnd.h ../../include/bnd.com \ > ../../include/three.com ../../include/nloc.com ../../include/flags.com \ > ../../include/stream.com ../../include/io.com > exgeo.o: ../../include/io.com ../../include/constants.h ../../include/nloc.com > iset.o: ../../include/iset.com > lhsbc5.o: ../../include/iset.com > main.o: ../../include/stack.com > matsch.o: ../../include/flags.com > mshcnt.o: ../../include/verbose.com ../../include/io.com > nodres.o: ../../include/paramt.h ../../include/bnd.h ../../include/constants.h \ > ../../include/bnd.com ../../include/nloc.com ../../include/flags.com \ > ../../include/stream.com ../../include/conv.com \ > ../../include/nboun.com ../../include/implicit.h ../../include/io.com > psub.o: ../../include/constants.h ../../include/paramt.h ../../include/nloc.com \ > ../../include/flags.com > rdat.o: ../../include/paramt.h ../../include/bnd.h ../../include/implicit.h \ > ../../include/visco.com ../../include/constants.h \ > ../../include/conv.com ../../include/stream.com \ > ../../include/chorin.com ../../include/scalar.com \ > ../../include/flags.com ../../include/turb.com \ > ../../include/bnd.com ../../include/io.com \ > ../../include/verbose.com > readat.o: ../../include/constants.h ../../include/bnd.h ../../include/paramt.h \ > ../../include/io.com ../../include/nloc.com ../../include/flags.com \ > ../../include/stream.com > rhsbc1.o: ../../include/paramt.h ../../include/constants.h \ > ../../include/iset.com > rhsbc4.o: ../../include/paramt.h ../../include/constants.h \ > ../../include/iset.com > rhsbc5.o: ../../include/paramt.h ../../include/iset.com \ > ../../include/constants.h > sclsch.o: ../../include/flags.com > solzne.o: ../../include/io.com > turbcomp.o: ../../include/paramt.h ../../include/constants.h ../../include/nloc.com \ > ../../include/three.com ../../include/flags.com ../../include/turb.com \ > ../../include/trip.com ../../include/visco.com \ > ../../include/nboun.com ../../include/implicit.h ../../include/io.com > update2.o: ../../include/constants.h ../../include/paramt.h \ > ../../include/conv.com ../../include/nloc.com ../../include/verbose.com \ > ../../include/implicit.h ../../include/iset.com \ > ../../include/flags.com ../../include/io.com > update3.o: ../../include/constants.h ../../include/paramt.h \ > ../../include/implicit.h ../../include/conv.com ../../include/nloc.com \ > ../../include/verbose.com ../../include/iset.com \ > ../../include/flags.com ../../include/io.com > update4.o: ../../include/constants.h ../../include/paramt.h \ > ../../include/conv.com ../../include/nboun.com ../../include/nloc.com \ > ../../include/verbose.com ../../include/implicit.h ../../include/io.com > weakbc.o: ../../include/paramt.h ../../include/constants.h ../../include/bnd.h \ > ../../include/bnd.com ../../include/three.com ../../include/nloc.com \ > ../../include/implicit.h rotaterhs.f rotaterhs2.f From Antoine.Cote3 at USherbrooke.ca Fri Apr 24 09:47:45 2020 From: Antoine.Cote3 at USherbrooke.ca (=?iso-8859-1?Q?Antoine_C=F4t=E9?=) Date: Fri, 24 Apr 2020 14:47:45 +0000 Subject: [petsc-users] Vec sizing using DMDA In-Reply-To: References: , Message-ID: Hi, Thanks 
for the fast response! An array of Vec would indeed solve my problem. I just don't know how to allocate it. Say I have a Vec U of the right size (created with a DMDA), and nlc = 4 load cases. How should I allocate and initialize the array? Best regards ________________________________ De : Matthew Knepley Envoy? : 23 avril 2020 15:23 ? : Antoine C?t? Cc : petsc-users at mcs.anl.gov Objet : Re: [petsc-users] Vec sizing using DMDA On Thu, Apr 23, 2020 at 12:12 PM Antoine C?t? > wrote: Hi, I'm using a C++/PETSc program to do Topological Optimization. A finite element analysis is solved at every iteration of the optimization. Displacements U are obtained using KSP solver. U is a Vec created using a 3D DMDA with 3 DOF (ux, uy, uz). Boundary conditions are stored in Vec N, and forces in Vec RHS. They also have 3 DOF, as they are created using VecDuplicate on U. My problem : I have multiple load cases (i.e. different sets of boundary conditions (b.c.) and forces). Displacements U are solved for each load case. I need to extract rapidly the b.c. and forces for each load case before solving. One way would be to change the DOF of the DMDA (e.g. for 8 load cases, we could use 3*8=24 DOF). Problem is, prior solving, we would need to loop on nodes to extract the b.c. and forces, for every node, for every load case and for every iteration of the optimization. This is a waste of time, as b.c. and forces are constant for a given load case. A better way would be to assemble b.c. and forces for every load case once, and read them afterwards as needed. This is currently done using a VecDuplicate on U to create multiple vectors N and RHS (N_0, N_1, RHS_0, RHS_1, etc.). Those vectors are hard coded, and can only solve a set number of load cases. I'm looking for a way to allocate dynamically the number of N and RHS vectors. What I would like : Given nlc, the number of load cases and nn, the number of nodes in the DMDA. Create matrices N and RHS of size (DOF*nn lines, nlc columns). While optimizing : for every load case, use N[all lines, current load case column] and RHS[all lines, current load case column], solve with KSP, obtain displacement U[all lines, current load case]. Would that be possible? Why wouldn't you just allocate an array of Vecs, since you only use one at a time? Thanks, Matt Best regards, Antoine C?t? -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 24 10:10:14 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 24 Apr 2020 11:10:14 -0400 Subject: [petsc-users] Vec sizing using DMDA In-Reply-To: References: Message-ID: On Fri, Apr 24, 2020 at 10:47 AM Antoine C?t? wrote: > Hi, > > Thanks for the fast response! An array of Vec would indeed solve my > problem. I just don't know how to allocate it. Say I have a Vec U of the > right size (created with a DMDA), and nlc = 4 load cases. How should I > allocate and initialize the array? 
> Vec *rhs; PetscInt i; ierr = PetscMalloc1(N, &rhs);CHKERRQ(ierr); for (i = 0; i < N; ++I) { ierr = DMCreateGlobalVector(dm, &rhs[i]);CHKERRQ(ierr); } /* Access vector with rhs[i] */ for (i = 0; i < N; ++I) { ierr = DMDestroyGlobalVector(dm, &rhs[i]);CHKERRQ(ierr); } ierr = PetscFree(rhs); Thanks, Matt Best regards > > ------------------------------ > *De :* Matthew Knepley > *Envoy? :* 23 avril 2020 15:23 > *? :* Antoine C?t? > *Cc :* petsc-users at mcs.anl.gov > *Objet :* Re: [petsc-users] Vec sizing using DMDA > > On Thu, Apr 23, 2020 at 12:12 PM Antoine C?t? < > Antoine.Cote3 at usherbrooke.ca> wrote: > > Hi, > > I'm using a C++/PETSc program to do Topological Optimization. A finite > element analysis is solved at every iteration of the optimization. > Displacements U are obtained using KSP solver. U is a Vec created using a > 3D DMDA with 3 DOF (ux, uy, uz). Boundary conditions are stored in Vec N, > and forces in Vec RHS. They also have 3 DOF, as they are created using > VecDuplicate on U. > > My problem : I have multiple load cases (i.e. different sets of boundary > conditions (b.c.) and forces). Displacements U are solved for each load > case. I need to extract rapidly the b.c. and forces for each load case > before solving. > > One way would be to change the DOF of the DMDA (e.g. for 8 load cases, we > could use 3*8=24 DOF). Problem is, prior solving, we would need to loop on > nodes to extract the b.c. and forces, for every node, for every load case > and for every iteration of the optimization. This is a waste of time, as > b.c. and forces are constant for a given load case. > > A better way would be to assemble b.c. and forces for every load case > once, and read them afterwards as needed. This is currently done using a > VecDuplicate on U to create multiple vectors N and RHS (N_0, N_1, RHS_0, > RHS_1, etc.). Those vectors are hard coded, and can only solve a set number > of load cases. > > I'm looking for a way to allocate dynamically the number of N and RHS > vectors. What I would like : > Given nlc, the number of load cases and nn, the number of nodes in the > DMDA. Create matrices N and RHS of size (DOF*nn lines, nlc columns). While > optimizing : for every load case, use N[all lines, current load case > column] and RHS[all lines, current load case column], solve with KSP, > obtain displacement U[all lines, current load case]. > > Would that be possible? > > > Why wouldn't you just allocate an array of Vecs, since you only use one at > a time? > > Thanks, > > Matt > > > Best regards, > > Antoine C?t? > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Apr 24 10:27:31 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 24 Apr 2020 10:27:31 -0500 (CDT) Subject: [petsc-users] makefile changes since release 12 In-Reply-To: <87eesd55vh.fsf@jedbrown.org> References: <87eesd55vh.fsf@jedbrown.org> Message-ID: On Fri, 24 Apr 2020, Jed Brown wrote: > I'd say yes, that's the right thing. These are vestigial remnants of > the legacy make system. 
> > I believe PETSc doesn't use this anywhere internally, and thus can be > removed. I don't know if the following patch would break any existing > correct usage. > > diff --git i/lib/petsc/conf/variables w/lib/petsc/conf/variables > index a7de68e2ae..07bc4254c9 100644 > --- i/lib/petsc/conf/variables > +++ w/lib/petsc/conf/variables > @@ -36,13 +36,13 @@ F_SH_LIB_PATH = ${PETSC_F_SH_LIB_PATH} > PSOURCEC = $(SOURCEC:%=`pwd`/%) > PSOURCECXX= $(SOURCECXX:%=`pwd`/%) > PSOURCECU = $(SOURCECU:%=`pwd`/%) > -PETSC_COMPILE = ${PCC} -c ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} ${PSOURCEC} > +PETSC_COMPILE = ${PCC} -c ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} > PETSC_CCOMPILE = ${CC} -c ${CC_FLAGS} ${CPPFLAGS} ${PETSC_CC_INCLUDES} > -PETSC_CXXCOMPILE = ${CXX} -c ${CXX_FLAGS} ${CXXFLAGS} ${CXXCPPFLAGS} ${PSOURCECXX} > +PETSC_CXXCOMPILE = ${CXX} -c ${CXX_FLAGS} ${CXXFLAGS} ${CXXCPPFLAGS} > PETSC_COMPILE_SINGLE = ${PCC} -o $*.o -c ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} > PETSC_CXXCOMPILE_SINGLE = ${CXX} -o $*.o -c ${CXX_FLAGS} ${CXXFLAGS} ${CXXCPPFLAGS} > -PETSC_FCOMPILE = ${FC} -c ${FC_FLAGS} ${FFLAGS} ${FCPPFLAGS} ${SOURCEF} ${SOURCEF90} > -PETSC_CUCOMPILE = ${CUDAC} ${CUDAC_FLAGS} -c --compiler-options="${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}" ${PSOURCECU} > +PETSC_FCOMPILE = ${FC} -c ${FC_FLAGS} ${FFLAGS} ${FCPPFLAGS} > +PETSC_CUCOMPILE = ${CUDAC} ${CUDAC_FLAGS} -c --compiler-options="${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}" > PETSC_CUCOMPILE_SINGLE = ${CUDAC} -o $*.o ${CUDAC_FLAGS} -c --compiler-options="${PCC_FLAGS} ${CFLAGS} ${CCPPFLAGS}" > # > # define OBJSC OBJSCXX and OBJSF OBJSCU I think Matt's approach is good for maint. Wrt cleanup - likely we just need PETSC_COMPILE_SINGLE and related stuff and not PETSC_COMPILE and related stuff > > > > Crossing over to dev discussion, I wish we could make these names more > standard, like the following predefined ones (see `make -p`). > > COMPILE.F = $(FC) $(FFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c > %.o: %.F > # recipe to execute (built-in): > $(COMPILE.F) $(OUTPUT_OPTION) $< > LINK.F = $(FC) $(FFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) > %: %.F > # recipe to execute (built-in): > $(LINK.F) $^ $(LOADLIBES) $(LDLIBS) -o $@ > > > So we could define PETSC_COMPILE.F, PETSC_LINK.F, etc., and a user could write > > COMPILE.F = PETSC_COMPILE.F > > if all their *.F sources use PETSc, in which case all the standard rules > would work. Or, they could use PETSc-enabled build only for a subset of > their sources: > > $(SOURCES_USING_PETSC:%.F=%.o) : %.o : %.F > $(PETSC_COMPILE.F) $(OUTPUT_OPTION) $< Hm - I thought we were going to leave the current rules file to be portable - and have gnumakefile syntax available via share/petsc/Makefile.user Hence we have similar rules [as above] in lib/petsc/conf/test Satish > > Aldo Bonfiglioli writes: > > > Hi there, > > > > the makefile I have been using for ages (up to 11.4) now fails with 12.5. > > > > I noticed that there have been several changes in > > > > include $(PETSC_DIR)/lib/petsc/conf/variables > > include $(PETSC_DIR)/lib/petsc/conf/rules > > > > btw. the two aforementioned versions. > > > > If I'm not wrong, *.F files should now be compiled with: > > > >> > >> .F.o .F90.o .F95.o: > >> ??????? ${PETSC_FCOMPILE} -o $@ $< > > However,? in ${PETSC_FCOMPILE} there are also my ${SOURCEF} fortran sources, > > > > so that I get the following compilation error: > > > >> gfortran -c -fPIC -Wall -ffree-line-length-0 > >> -Wno-unused-dummy-argument -g -I../../include/ -I. 
> >> -I/home/abonfi/src/petsc-3.12.5/include > >> -I/home/abonfi/src/petsc-3.12.5/linux_gnu/include???? getidx.f > >> ApplicationFunction.F ApplicationFunction_t.F bndry_iset.F > >> JacobianBoundaryConditions.F RHSFunction.F RHSFunction_t.F > >> RHSJacobian.F RHSJacobian_t.F blockdata.f bndvflx.F clearmem.F > >> lhsbc5.F lhsbc6.F exgeo.F newgeo.F ghost.F ghost2.F init.F iset.F > >> iset_t.F main.F matsch.F MotionSolver.F myTS.F nodres.F nodres_t.F > >> noname.f printmat2.F printmat.F printmatmm.F qb.F rdat.F readat.F > >> rgrdpts.F rhsbc1.F rhsbc4.F rhsbc5.F rhsbc5c.F sclsch.F > >> setbc4laplace.F setibc.F seterr.F setupRHS.F setupRHS_t.F setupLHS_b.F > >> solzne.F MatAllocaSeq.F test.F tmodel.F turbini.F turbsch.F update2.F > >> update3.F update4.F weakbc.F -o ApplicationFunction_t.o > >> ApplicationFunction_t.F > >> gfortran: fatal error: cannot specify ?-o? with ?-c?, ?-S? or ?-E? > >> with multiple files > >> compilation terminated. > > > > If I remove ${SOURCEF} from the ${PETSC_FCOMPILE} definition in > > > > $(PETSC_DIR)/lib/petsc/conf/variables > > > > things work, but I am not sure that this is the right thing to do. > > > > Thanks, > > > > Aldo > > > > PS My makefile is attached > > > > -- > > Dr. Aldo Bonfiglioli > > Associate professor of Fluid Machines > > Scuola di Ingegneria > > Universita' della Basilicata > > V.le dell'Ateneo lucano, 10 85100 Potenza ITALY > > tel:+39.0971.205203 fax:+39.0971.205215 > > web: http://docenti.unibas.it/site/home/docente.html?m=002423 > > > > VERSION = 3.12.0 > > PROGRAM = eulfs$(VERSION)-$(PETSC_ARCH) > > all: $(PROGRAM) > > FFLAGS = -I../../include/ -I. > > FCPPFLAGS = $(FCPPFLAGS) -I../../include/ -I. > > CFLAGS = > > # > > # ad hoc fix for SP3 > > # > > #FCPPFLAGS = ${PETSC_INCLUDE} ${PCONF} ${PETSCFLAGS} ${PETSC_PARCH} \ > > ${FPPFLAGS} -I$(FSPL_DIR)/include/ > > > > include $(PETSC_DIR)/lib/petsc/conf/variables > > include $(PETSC_DIR)/lib/petsc/conf/rules > > > > > > DEST = $(HOME)/bin/$(PETSC_ARCH) > > INSTALL = cp > > SOURCEC = > > SOURCEF = \ > > getidx.f \ > > ApplicationFunction.F \ > > ApplicationFunction_t.F \ > > bndry_iset.F \ > > JacobianBoundaryConditions.F \ > > RHSFunction.F \ > > RHSFunction_t.F \ > > RHSJacobian.F \ > > RHSJacobian_t.F \ > > blockdata.f \ > > bndvflx.F \ > > clearmem.F \ > > lhsbc5.F \ > > lhsbc6.F \ > > exgeo.F \ > > newgeo.F \ > > ghost.F \ > > ghost2.F \ > > init.F \ > > iset.F \ > > iset_t.F \ > > main.F \ > > matsch.F \ > > MotionSolver.F \ > > myTS.F \ > > nodres.F \ > > nodres_t.F \ > > noname.f \ > > printmat2.F \ > > printmat.F \ > > printmatmm.F \ > > qb.F \ > > rdat.F \ > > readat.F \ > > rgrdpts.F \ > > rhsbc1.F \ > > rhsbc4.F \ > > rhsbc5.F \ > > rhsbc5c.F \ > > sclsch.F \ > > setbc4laplace.F \ > > setibc.F \ > > seterr.F \ > > setupRHS.F \ > > setupRHS_t.F \ > > setupLHS_b.F \ > > solzne.F \ > > MatAllocaSeq.F \ > > test.F \ > > tmodel.F \ > > turbini.F \ > > turbsch.F \ > > update2.F \ > > update3.F \ > > update4.F \ > > weakbc.F > > SOURCEH = > > OBJSC = > > OBJSF = \ > > getidx.o \ > > ApplicationFunction.o \ > > ApplicationFunction_t.o \ > > bndry_iset.o \ > > JacobianBoundaryConditions.o \ > > RHSFunction.o \ > > RHSFunction_t.o \ > > RHSJacobian.o \ > > RHSJacobian_t.o \ > > blockdata.o \ > > bndvflx.o \ > > clearmem.o \ > > lhsbc5.o \ > > lhsbc6.o \ > > exgeo.o \ > > newgeo.o \ > > ghost.o \ > > ghost2.o \ > > init.o \ > > iset.o \ > > iset_t.o \ > > main.o \ > > matsch.o \ > > MotionSolver.o \ > > myTS.o \ > > nodres.o \ > > nodres_t.o \ > > noname.o \ > > printmat2.o \ > > 
printmat.o \ > > printmatmm.o \ > > qb.o \ > > rdat.o \ > > readat.o \ > > rgrdpts.o \ > > rhsbc1.o \ > > rhsbc4.o \ > > rhsbc5.o \ > > rhsbc5c.o \ > > sclsch.o \ > > setbc4laplace.o \ > > setibc.o \ > > seterr.o \ > > setupRHS.o \ > > setupRHS_t.o \ > > setupLHS_b.o \ > > solzne.o \ > > MatAllocaSeq.o \ > > test.o \ > > tmodel.o \ > > turbini.o \ > > turbsch.o \ > > update2.o \ > > update3.o \ > > update4.o \ > > weakbc.o > > LIBBASE = > > #LIBFLAGS = -L$(HOME)/lib/$(PETSC_ARCH) -lfxdr -lport -lmynag -lskit > > LIBFLAGS = -L$(HOME)/lib/$(PETSC_ARCH) -lfxdr -lport -lsparse-blas -lskit -ltirpc > > LIBS = \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libscalar.a \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libeuler.a \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libspl.a \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libns.a \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libturbo.a \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libgeo.a \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libchem.a \ > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libutil.a > > # > > # CLDFILES to be defined only for CRAY > > # > > #CLDFILES = dp_lapack.cld dp_blas.cld pat.cld > > #CLDFILES = dp_lapack.cld dp_blas.cld > > > > #look:; @echo $(SOURCEALL) $(OBJSF) > > look:; @echo "Look man! Isn't it weird? " $(PETSC_FCOMPILE) > > > > > > $(PROGRAM): $(OBJSF) $(OBJSC) $(LIBS) > > -$(FLINKER) $(CLDFILES) -o $(PROGRAM) $(OBJSF) $(OBJSC) $(LIBS) \ > > $(PETSC_FORTRAN_LIB) $(PETSC_LIB) $(LIBFLAGS) > > > > > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libgeo.a: > > cd $(FSPL_DIR)/src/geometry; $(MAKE) install > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libeuler.a: > > cd $(FSPL_DIR)/src/euler; $(MAKE) install > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libspl.a: > > cd $(FSPL_DIR)/src/schemes; $(MAKE) install > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libns.a: > > cd $(FSPL_DIR)/src/navier-stokes; $(MAKE) install > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libscalar.a: > > cd $(FSPL_DIR)/src/scalar; $(MAKE) install > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libturbo.a: > > cd $(FSPL_DIR)/src/turbo; $(MAKE) install > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libchem.a: > > cd $(FSPL_DIR)/src/chemistry; $(MAKE) install > > $(FSPL_DIR)/lib/$(PETSC_ARCH)/libutil.a: > > cd $(FSPL_DIR)/src/util; $(MAKE) install > > > > > > #.SUFFIXES: > > #.F.o: > > # $(U_FC) -c $(FFLAGS) $(FCPPFLAGS) $< > > #.f.o: > > # $(U_FC) -c $(FFLAGS) $< > > ######### > > > > > > checkout:; @co $(SOURCEF) > > > > install: $(PROGRAM) > > @echo Installing $(PROGRAM) in $(DEST) > > @if [ $(DEST) != . 
]; then \ > > (rm -f $(DEST)/$(PROGRAM); $(INSTALL) $(PROGRAM) $(DEST)); fi > > ### > > blockdata.o: ../../include/paramt.h ../../include/bnd.h \ > > ../../include/constants.h ../../include/bnd.com \ > > ../../include/conv.com ../../include/implicit.h \ > > ../../include/nboun.com ../../include/three.com > > bndvflx.o: ../../include/paramt.h ../../include/constants.h \ > > ../../include/implicit.h ../../include/bnd.h ../../include/bnd.com \ > > ../../include/three.com ../../include/nloc.com ../../include/flags.com \ > > ../../include/stream.com ../../include/io.com > > exgeo.o: ../../include/io.com ../../include/constants.h ../../include/nloc.com > > iset.o: ../../include/iset.com > > lhsbc5.o: ../../include/iset.com > > main.o: ../../include/stack.com > > matsch.o: ../../include/flags.com > > mshcnt.o: ../../include/verbose.com ../../include/io.com > > nodres.o: ../../include/paramt.h ../../include/bnd.h ../../include/constants.h \ > > ../../include/bnd.com ../../include/nloc.com ../../include/flags.com \ > > ../../include/stream.com ../../include/conv.com \ > > ../../include/nboun.com ../../include/implicit.h ../../include/io.com > > psub.o: ../../include/constants.h ../../include/paramt.h ../../include/nloc.com \ > > ../../include/flags.com > > rdat.o: ../../include/paramt.h ../../include/bnd.h ../../include/implicit.h \ > > ../../include/visco.com ../../include/constants.h \ > > ../../include/conv.com ../../include/stream.com \ > > ../../include/chorin.com ../../include/scalar.com \ > > ../../include/flags.com ../../include/turb.com \ > > ../../include/bnd.com ../../include/io.com \ > > ../../include/verbose.com > > readat.o: ../../include/constants.h ../../include/bnd.h ../../include/paramt.h \ > > ../../include/io.com ../../include/nloc.com ../../include/flags.com \ > > ../../include/stream.com > > rhsbc1.o: ../../include/paramt.h ../../include/constants.h \ > > ../../include/iset.com > > rhsbc4.o: ../../include/paramt.h ../../include/constants.h \ > > ../../include/iset.com > > rhsbc5.o: ../../include/paramt.h ../../include/iset.com \ > > ../../include/constants.h > > sclsch.o: ../../include/flags.com > > solzne.o: ../../include/io.com > > turbcomp.o: ../../include/paramt.h ../../include/constants.h ../../include/nloc.com \ > > ../../include/three.com ../../include/flags.com ../../include/turb.com \ > > ../../include/trip.com ../../include/visco.com \ > > ../../include/nboun.com ../../include/implicit.h ../../include/io.com > > update2.o: ../../include/constants.h ../../include/paramt.h \ > > ../../include/conv.com ../../include/nloc.com ../../include/verbose.com \ > > ../../include/implicit.h ../../include/iset.com \ > > ../../include/flags.com ../../include/io.com > > update3.o: ../../include/constants.h ../../include/paramt.h \ > > ../../include/implicit.h ../../include/conv.com ../../include/nloc.com \ > > ../../include/verbose.com ../../include/iset.com \ > > ../../include/flags.com ../../include/io.com > > update4.o: ../../include/constants.h ../../include/paramt.h \ > > ../../include/conv.com ../../include/nboun.com ../../include/nloc.com \ > > ../../include/verbose.com ../../include/implicit.h ../../include/io.com > > weakbc.o: ../../include/paramt.h ../../include/constants.h ../../include/bnd.h \ > > ../../include/bnd.com ../../include/three.com ../../include/nloc.com \ > > ../../include/implicit.h rotaterhs.f rotaterhs2.f > From jed at jedbrown.org Fri Apr 24 11:09:16 2020 From: jed at jedbrown.org (Jed Brown) Date: Fri, 24 Apr 2020 10:09:16 -0600 Subject: [petsc-users] 
makefile changes since release 12 In-Reply-To: References: <87eesd55vh.fsf@jedbrown.org> Message-ID: <875zdo6f2r.fsf@jedbrown.org> Satish Balay writes: >> Crossing over to dev discussion, I wish we could make these names more >> standard, like the following predefined ones (see `make -p`). >> >> COMPILE.F = $(FC) $(FFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c >> %.o: %.F >> # recipe to execute (built-in): >> $(COMPILE.F) $(OUTPUT_OPTION) $< >> LINK.F = $(FC) $(FFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) >> %: %.F >> # recipe to execute (built-in): >> $(LINK.F) $^ $(LOADLIBES) $(LDLIBS) -o $@ >> >> >> So we could define PETSC_COMPILE.F, PETSC_LINK.F, etc., and a user could write >> >> COMPILE.F = PETSC_COMPILE.F >> >> if all their *.F sources use PETSc, in which case all the standard rules >> would work. Or, they could use PETSc-enabled build only for a subset of >> their sources: >> >> $(SOURCES_USING_PETSC:%.F=%.o) : %.o : %.F >> $(PETSC_COMPILE.F) $(OUTPUT_OPTION) $< > > Hm - I thought we were going to leave the current rules file to be portable - and have gnumakefile syntax available via share/petsc/Makefile.user I'm suggesting that a user who has some source files that need to be compiled with PETSc (and some that should not use PETSc flags) could use this. What I'm not a fan of is having so many different conventions. I'm fine with promoting share/petsc/Makefile.user and treating this as legacy. lib/petsc/conf/variables has this. It isn't used, but is (I think) GNU-specific. PSOURCEC = $(SOURCEC:%=`pwd`/%) PSOURCECXX= $(SOURCECXX:%=`pwd`/%) PSOURCECU = $(SOURCECU:%=`pwd`/%) From stefano.zampini at gmail.com Fri Apr 24 15:38:38 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Fri, 24 Apr 2020 23:38:38 +0300 Subject: [petsc-users] CUDA error In-Reply-To: References: <9A8B2DBF-9611-445B-9A29-E0B6825C5725@gmail.com> Message-ID: Il giorno mer 15 apr 2020 alle ore 22:14 Mark Adams ha scritto: > > Lots of these: > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in > /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ > veccuda2.cu > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in > /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ > veccuda2.cu > [ 0]32 bytes VecCUDAAllocateCheck() line 34 in > /autofs/nccs-svm1_home1/adams/petsc/src/vec/vec/impls/seq/seqcuda/ > veccuda2.cu > > > Mark can you cherry-pick this commit https://gitlab.com/petsc/petsc/-/merge_requests/2712/diffs?commit_id=20ff9baf76cac672a639b6635bba1ad11ddf4a11 and see if the leaks are still there? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Fri Apr 24 17:31:58 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 24 Apr 2020 15:31:58 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> Message-ID: <4035EB5E-DA6B-4995-8710-B062C8663D41@gmail.com> Hi Junchao, I tested by commenting out the AOApplicationToPetsc calls as you suggest, but it doesn?t work because it doesn?t maintain the proper order of the elements in the scattered vectors. I attach a modified version of the test code where I put elements into the global vector, then carry out the scatter, and check on the subcomms that they are correct. You can see everything is fine with the AOApplicationToPetsc calls, but the comparison fails when those are commented out. 
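For anyone following the thread, the construction at issue looks roughly like this in C. This is only a sketch: the routine name, the argument list, and the communicator handling are simplifications made up for illustration, and the attached test.F90 remains the authoritative version.

   /* Build a scatter from a parent-comm vector into a subcomm vector,
      mapping application-ordered indices through the AOs first. */
   #include <petscao.h>
   #include <petscvec.h>

   PetscErrorCode BuildSubcommScatter(Vec xParent, Vec xSub, AO aoParent, AO aoSub,
                                      PetscInt nis, PetscInt *ind1, PetscInt *ind2,
                                      VecScatter *scat)
   {
     IS             isParent, isSub;
     PetscErrorCode ierr;

     PetscFunctionBeginUser;
     /* These two calls are the ones under discussion. */
     ierr = AOApplicationToPetsc(aoParent, nis, ind1);CHKERRQ(ierr);
     ierr = AOApplicationToPetsc(aoSub, nis, ind2);CHKERRQ(ierr);

     ierr = ISCreateGeneral(PetscObjectComm((PetscObject)xParent), nis, ind1, PETSC_COPY_VALUES, &isParent);CHKERRQ(ierr);
     ierr = ISCreateGeneral(PetscObjectComm((PetscObject)xSub), nis, ind2, PETSC_COPY_VALUES, &isSub);CHKERRQ(ierr);

     ierr = VecScatterCreate(xParent, isParent, xSub, isSub, scat);CHKERRQ(ierr);

     ierr = ISDestroy(&isParent);CHKERRQ(ierr);
     ierr = ISDestroy(&isSub);CHKERRQ(ierr);
     PetscFunctionReturn(0);
   }

The question below is whether the two AOApplicationToPetsc calls in this pattern can be dropped while still keeping the scattered entries in the correct order.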
If there is some way I can achieve the right VecScatters without those calls, I would be happy to know how to do that. Thank you again for your help. Randy ps. I suggest you run this test with nx=ny=nz=10 and only a couple subcomms and maybe 4 processes to demonstrate the behavior > On Apr 20, 2020, at 2:45 PM, Junchao Zhang wrote: > > Hello, Randy, > I further looked at the problem and believe it was due to overwhelming traffic. The code sometimes fails at MPI_Waitall. I printed out MPI error strings of bad MPI Statuses. One of them is like "MPID_nem_tcp_connpoll(1845): Communication error with rank 25: Connection reset by peer", which is a tcp error and has nothing to do with petsc. > Further investigation shows in the case of 5120 ranks with 320 sub communicators, during VecScatterSetUp, each rank has around 640 isends/irecvs neighbors, and quite a few ranks has 1280 isends neighbors. I guess these overwhelming isends occasionally crashed the connection. > The piece of code in VecScatterSetUp is to calculate the communication pattern. With index sets "having good locality", the calculate itself incurs less traffic. Here good locality means indices in an index set mostly point to local entries. However, the AOApplicationToPetsc() call in your code unnecessarily ruined the good petsc ordering. If we remove AOApplicationToPetsc() (the vecscatter result is still correct) , then each rank uniformly has around 320 isends/irecvs. > So, test with this modification and see if it really works in your environment. If not applicable, we can provide options in petsc to carry out the communication in phases to avoid flooding the network (though it is better done by MPI). > > Thanks. > --Junchao Zhang > > > On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie > wrote: > Hi Junchao, > > Thank you for your efforts. > We tried petsc-3.13.0 but it made no difference. > We think now the issue are with sysctl parameters, and increasing those seemed to have cleared up the problem. > This also most likely explains how different clusters had different behaviors with our test code. > > We are now running our code and will report back once we are sure that there are no further issues. > > Thanks again for your help. > > Randy M. > >> On Apr 17, 2020, at 8:09 AM, Junchao Zhang > wrote: >> >> >> >> >> On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang > wrote: >> Randy, >> I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also found the error went away with petsc-3.13. However, I have not figured out what is the bug and which commit fixed it :). >> So at your side, it is better to use the latest petsc. >> I want to add that even with petsc-3.12.4 the error is random. I was only able to reproduce the error once, so I can not claim petsc-3.13 actually fixed it (or, the bug is really in petsc). >> >> --Junchao Zhang >> >> >> On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang > wrote: >> Randy, >> Up to now I could not reproduce your error, even with the biggest mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 >> While I continue doing test, you can try other options. It looks you want to duplicate a vector to subcomms. I don't think you need the two lines: >> call AOApplicationToPetsc(aoParent,nis,ind1,ierr) >> call AOApplicationToPetsc(aoSub,nis,ind2,ierr) >> In addition, you can use simpler and more memory-efficient index sets. 
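(As a rough illustration of that suggestion, and not code taken from this thread: when the entries a rank contributes are contiguous, a stride index set avoids storing an explicit index array at all. The vector name xParent below is assumed from earlier context.)

   PetscErrorCode ierr;
   IS             is;
   PetscInt       rstart, rend, n;

   /* Describe this rank's contiguous range of xParent without an index array. */
   ierr = VecGetOwnershipRange(xParent, &rstart, &rend);CHKERRQ(ierr);
   n    = rend - rstart;
   ierr = ISCreateStride(PETSC_COMM_SELF, n, rstart, 1, &is);CHKERRQ(ierr);
   /* ... pass is to VecScatterCreate(), then ISDestroy(&is) ... */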
There is a petsc example for this task, see case 3 in https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c >> BTW, it is good to use petsc master so we are on the same page. >> --Junchao Zhang >> >> >> On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie > wrote: >> Hi Junchao, >> >> So I was able to create a small test code that duplicates the issue we have been having, and it is attached to this email in a zip file. >> Included is the test.F90 code, the commands to duplicate crash and to duplicate a successful run, output errors, and our petsc configuration. >> >> Our findings to date include: >> >> The error is reproducible in a very short time with this script >> It is related to nproc*nsubs and (although to a less extent) to DM grid size >> It happens regardless of MPI implementation (mpich, intel mpi 2018, 2019, openmpi) or compiler (gfortran/gcc , intel 2018) >> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to slightly increase the limit, but still fails on the full machine set. >> Nothing looks interesting on valgrind >> >> Our initial tests were carried out on an Azure cluster, but we also tested on our smaller cluster, and we found the following: >> >> Works: >> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >> >> Crashes (this works on Azure) >> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >> >> So it looks like it may also be related to the physical number of nodes as well. >> >> In any case, even with 2560 processes on 192 cores the memory does not go above 3.5 Gbyes so you don?t need a huge cluster to test. >> >> Thanks, >> >> Randy M. >> >> >> >>> On Apr 14, 2020, at 12:23 PM, Junchao Zhang > wrote: >>> >>> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I doubted it was the problem. Even if users configure petsc with 64-bit indices, we use PetscMPIInt in MPI calls. So it is not a problem. >>> Try -vecscatter_type mpi1 to restore to the original VecScatter implementation. If the problem still remains, could you provide a test example for me to debug? >>> >>> --Junchao Zhang >>> >>> >>> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie > wrote: >>> Hi Junchao, >>> >>> We have tried your two suggestions but the problem remains. >>> And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. >>> >>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the problem must be elsewhere and not MPI. >>> >>> Give that this is a 64 bit indices build of PETSc, is there some possible incompatibility between PETSc and MPI calls? >>> >>> We are open to any other possible suggestions to try as other than valgrind on thousands of processes we seem to have run out of ideas. >>> >>> Thanks, Randy M. >>> >>>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang > wrote: >>>> >>>> >>>> --Junchao Zhang >>>> >>>> >>>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang > wrote: >>>> Randy, >>>> Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr >>>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>>> But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. >>>> Thanks. 
>>>> --Junchao Zhang >>>> >>>> >>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: >>>> Dear PETSc users, >>>> >>>> We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. >>>> This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. >>>> >>>> The run generates the following messages, but doesn?t crash, just seems to hang with all processes continuing to show activity: >>>> >>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>>> >>>> >>>> Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend: >>>> >>>> >>>> /* Post the Isends with the message length-info */ >>>> for (i=0,j=0; i>>> if (ilengths[i]) { >>>> ierr = MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>>> j++; >>>> } >>>> } >>>> >>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the same problem. >>>> >>>> We suspect there is some limit being set on this cloud cluster on the number of file connections or something, but we don?t know. >>>> >>>> Anyone have any ideas? We are sort of grasping for straws at this point. >>>> >>>> Thanks, Randy M. >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test1.F90 Type: application/octet-stream Size: 7798 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Sat Apr 25 07:17:40 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sat, 25 Apr 2020 09:17:40 -0300 Subject: [petsc-users] Reuse compiled packages Message-ID: Hi all, While compiling 3.13.0 I have previously run $ ./configure ... --download-mumps --download-scalapack ... and now I have $ ls arch-linux2-c-opt/externalpackages/ fblaslapack-3.4.2/ petsc-pkg-fblaslapack-e8a03f57d64c/ petsc-pkg-mumps-d1a5c931b762/ petsc-pkg-scalapack-3ba8f741b828/ Are there any flags for configure that let me reuse the compiled versions I have? I did not try separate --with-mumps-include and --with-mumps-lib, as I think there should be a more rational approach All the combinations I tried failed ( stands for the current dir): 1. --with-packages-search-path= --with-mumps (what I would like best, as it would allow for a single search path for all external packages already compiled) 2. --with-packages-search-path=/arch-linux2-c-opt --with-mumps 3. --with-packages-search-path=/arch-linux2-c-opt/externalpackages --with-mumps 4. --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 --with-mumps 5. --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 --with-mumps-dir=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 --with-mumps Note that the directory is well written. 
If that weren't the case, --with-packages-search-path would simply not find it, but --with-mumps-dir would complain. Thanks in advance, Santiago -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Sat Apr 25 07:23:48 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sat, 25 Apr 2020 09:23:48 -0300 Subject: [petsc-users] Passing configuration options for external package compilation Message-ID: Hi all, When using $ ./configure ... --download-mumps --download-scalapack ... is there a way to pass configuration options for the compilation of mumps, scalapack, etc.? Otherwise, I would have to first compile those packages, and then tell PETSc configure where to find them. I am having trouble with that, as asked in https://lists.mcs.anl.gov/pipermail/petsc-users/2020-April/040945.html Thanks, Santiago -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Sat Apr 25 07:34:05 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sat, 25 Apr 2020 09:34:05 -0300 Subject: [petsc-users] Reuse compiled packages In-Reply-To: References: Message-ID: PS: In cases 1-4, configure keeps finding the precompiled package of mumps, ver 5.1.2 $ dpkg -l | grep mumps ii libmumps-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - parallel shared libraries ii libmumps-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - parallel development files ii libmumps-ptscotch-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - PTScotch-version shared libraries ii libmumps-ptscotch-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - PTScotch-version development files ii libmumps-scotch-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - Scotch-version shared libraries ii libmumps-scotch-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - Scotch-version development files ii libmumps-seq-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - non-parallel shared libraries ii libmumps-seq-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - non-parallel development files and output of configure: ... MUMPS: Version: 5.1.2 Library: -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord scalapack: Library: -lscalapack ... In case 5, I get an error =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: configureLibrary from config.packages.MUMPS(config/BuildSystem/config/package.py:868) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Bad option: --with-mumps-dir=/home/santiago/Documents/installers/petsc/petsc-3.13.0/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 /home/santiago/Documents/installers/petsc/petsc-3.13.0/arch-linux2-c-opt/externalpackages is reserved for --download-package scratch space. Do not install software in this location nor use software in this directory. ******************************************************************************* On Sat, Apr 25, 2020 at 9:17 AM wrote: > Hi all, > > While compiling 3.13.0 I have previously run > > $ ./configure ... --download-mumps --download-scalapack ... 
> > and now I have > > $ ls arch-linux2-c-opt/externalpackages/ > fblaslapack-3.4.2/ petsc-pkg-fblaslapack-e8a03f57d64c/ > petsc-pkg-mumps-d1a5c931b762/ petsc-pkg-scalapack-3ba8f741b828/ > > Are there any flags for configure that let me reuse the compiled versions > I have? > I did not try separate --with-mumps-include and --with-mumps-lib, as I > think there should be a more rational approach > All the combinations I tried failed ( stands for the current dir): > > 1. > --with-packages-search-path= --with-mumps > (what I would like best, as it would allow for a single search path for > all external packages already compiled) > > 2. > --with-packages-search-path=/arch-linux2-c-opt --with-mumps > > 3. > --with-packages-search-path=/arch-linux2-c-opt/externalpackages > --with-mumps > > 4. > --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > --with-mumps > > 5. > --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > --with-mumps-dir=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > --with-mumps > > Note that the directory is well written. If that weren't the case, > --with-packages-search-path would simply not find it, but --with-mumps-dir > would complain. > > Thanks in advance, > Santiago > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From san.temporal at gmail.com Sat Apr 25 08:04:56 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sat, 25 Apr 2020 10:04:56 -0300 Subject: [petsc-users] Configure with precompiled mumps and other packages Message-ID: Hi, I am trying to compile PETSc, using the precompiled Mumps in Ubuntu. The available Mumps version is 5.1.2, so I use PETSc 3.11 (for 3.13 I would require mumps 5.2.1, not available as a precompiled package). The packages are: $ dpkg -l | grep mumps ii libmumps-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - parallel shared libraries ii libmumps-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - parallel development files ii libmumps-ptscotch-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - PTScotch-version shared libraries ii libmumps-ptscotch-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - PTScotch-version development files ii libmumps-scotch-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - Scotch-version shared libraries ii libmumps-scotch-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - Scotch-version development files ii libmumps-seq-5.1.2:amd64 5.1.2-4 amd64 Direct linear systems solver - non-parallel shared libraries ii libmumps-seq-dev:amd64 5.1.2-4 amd64 Direct linear systems solver - non-parallel development files So I configure with $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 --with-shared-libraries --with-packages-download-dir=/home/santiago/Documents/installers/petsc --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 COPTFLAGS='-O -O3 -march=native -mtune=native' FOPTFLAGS='-O -O3 -march=native -mtune=native' CXXOPTFLAGS='-O -O3 -march=native -mtune=native' --force works fine. 
But $ make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" test Running test examples to verify correct installation Using PETSC_DIR=/home/santiago/usr/local and PETSC_ARCH= C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes See http://www.mcs.anl.gov/petsc/documentation/faq.html lid velocity = 0.0016, prandtl # = 1., grashof # = 1. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Signal received [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.11.4, Sep, 28, 2019 [0]PETSC ERROR: ./ex19 on a named isaiasPrecision-7820 by santiago Sat Apr 25 09:52:01 2020 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [1]PETSC ERROR: to get more information on the crash. [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Signal received [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[1]PETSC ERROR: Petsc Release Version 3.11.4, Sep, 28, 2019 [1]PETSC ERROR: ./ex19 on a named isaiasPrecision-7820 by santiago Sat Apr 25 09:52:01 2020 [1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 --with-shared-libraries --with-packages-download-dir=/home/santiago/Documents/installers/petsc --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 COPTFLAGS="-O -O3 -march=native -mtune=native" FOPTFLAGS="-O -O3 -march=native -mtune=native" CXXOPTFLAGS="-O -O3 -march=native -mtune=native" --force [1]PETSC ERROR: #1 User provided function() line 0 in unknown file [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 --with-shared-libraries --with-packages-download-dir=/home/santiago/Documents/installers/petsc --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 COPTFLAGS="-O -O3 -march=native -mtune=native" FOPTFLAGS="-O -O3 -march=native -mtune=native" CXXOPTFLAGS="-O -O3 -march=native -mtune=native" --force [0]PETSC ERROR: #1 User provided function() line 0 in unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- [isaiasPrecision-7820:09935] 1 more process has sent help message help-mpi-api.txt / mpi-abort [isaiasPrecision-7820:09935] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages fails. As mentioned in https://www.mcs.anl.gov/petsc/documentation/faq.html#PetscOptionsInsertFile (even if not the same error) I checked $ ping `hostname` It works fine. As a reference, a PETSc version that is compiled with ... --download-mumps --download-scalapack ... works fine. How can I compile and successfully check PETSc, using the precompiled Mumps in Ubuntu? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sat Apr 25 09:54:18 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 25 Apr 2020 09:54:18 -0500 (CDT) Subject: [petsc-users] Reuse compiled packages In-Reply-To: References: Message-ID: On Sat, 25 Apr 2020, san.temporal at gmail.com wrote: > Hi all, > > While compiling 3.13.0 I have previously run > > $ ./configure ... --download-mumps --download-scalapack ... > > and now I have > > $ ls arch-linux2-c-opt/externalpackages/ > fblaslapack-3.4.2/ petsc-pkg-fblaslapack-e8a03f57d64c/ > petsc-pkg-mumps-d1a5c931b762/ petsc-pkg-scalapack-3ba8f741b828/ > > Are there any flags for configure that let me reuse the compiled versions I > have? By default - if you use the same PETSC_ARCH and same download options - the installed packages won't get rebuilt. However - if you want to have a single install of externalpackages shared across multiple petsc builds - its best to install externalpackages in a prefix location. And then use this for subsequent builds. 
./configure PETSC_ARCH=arch-pkgs --prefix=$HOME/petsc-pkgs --download-mumps --download-scalapack ./configure PETSC__ARCH=arch-use-pkgs --with-mumps-dir=$HOME/petsc-pkgs --with-scalapack-dir=$HOME/petsc-pkgs --with-packages-search-path options hardly ever gets used - so will have to check if its broken or not. Satish > I did not try separate --with-mumps-include and --with-mumps-lib, as I > think there should be a more rational approach > All the combinations I tried failed ( stands for the current dir): > > 1. > --with-packages-search-path= --with-mumps > (what I would like best, as it would allow for a single search path for all > external packages already compiled) > > 2. > --with-packages-search-path=/arch-linux2-c-opt --with-mumps > > 3. > --with-packages-search-path=/arch-linux2-c-opt/externalpackages > --with-mumps > > 4. > --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > --with-mumps > > 5. > --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > --with-mumps-dir=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > --with-mumps > > Note that the directory is well written. If that weren't the case, > --with-packages-search-path would simply not find it, but --with-mumps-dir > would complain. > > Thanks in advance, > Santiago > From balay at mcs.anl.gov Sat Apr 25 09:59:20 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 25 Apr 2020 09:59:20 -0500 (CDT) Subject: [petsc-users] Passing configuration options for external package compilation In-Reply-To: References: Message-ID: On Sat, 25 Apr 2020, san.temporal at gmail.com wrote: > Hi all, > > When using > > $ ./configure ... --download-mumps --download-scalapack ... > > is there a way to pass configuration options for the compilation of mumps, > scalapack, etc.? MUMPS, scalapack don't use configure as part of the build. [or if it exists - petsc does not use this interface to build these packages]. MUMPs is built via Makefile.inc interface. Other packages do have configure - for ex: MPICH - where you can use: --download-mpich-configure-arguments=string Additional GNU autoconf configure arguments for the build of MPICH current: 0 Satish > > Otherwise, I would have to first compile those packages, and then tell > PETSc configure where to find them. > I am having trouble with that, as asked in > https://lists.mcs.anl.gov/pipermail/petsc-users/2020-April/040945.html > > Thanks, > Santiago > From balay at mcs.anl.gov Sat Apr 25 10:05:29 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 25 Apr 2020 10:05:29 -0500 (CDT) Subject: [petsc-users] Configure with precompiled mumps and other packages In-Reply-To: References: Message-ID: I have no idea whats different with UBUNTU MUMPS. You can try debugging it as the error message suggests. My suggestion is to stick with petsc-3.13 and --download-mumps. You can use ubuntu packages for software where API is more consistent - like blas/lapack, MPICH/OpenMPI Satish On Sat, 25 Apr 2020, san.temporal at gmail.com wrote: > Hi, > > I am trying to compile PETSc, using the precompiled Mumps in Ubuntu. The > available Mumps version is 5.1.2, so I use PETSc 3.11 (for 3.13 I would > require mumps 5.2.1, not available as a precompiled package). 
> > The packages are: > > $ dpkg -l | grep mumps > ii libmumps-5.1.2:amd64 5.1.2-4 > amd64 Direct linear systems solver - parallel shared > libraries > ii libmumps-dev:amd64 5.1.2-4 > amd64 Direct linear systems solver - parallel > development files > ii libmumps-ptscotch-5.1.2:amd64 5.1.2-4 > amd64 Direct linear systems solver - PTScotch-version > shared libraries > ii libmumps-ptscotch-dev:amd64 5.1.2-4 > amd64 Direct linear systems solver - PTScotch-version > development files > ii libmumps-scotch-5.1.2:amd64 5.1.2-4 > amd64 Direct linear systems solver - Scotch-version > shared libraries > ii libmumps-scotch-dev:amd64 5.1.2-4 > amd64 Direct linear systems solver - Scotch-version > development files > ii libmumps-seq-5.1.2:amd64 5.1.2-4 > amd64 Direct linear systems solver - non-parallel > shared libraries > ii libmumps-seq-dev:amd64 5.1.2-4 > amd64 Direct linear systems solver - non-parallel > development files > > So I configure with > > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > --prefix=/home/santiago/usr/local --with-make-np=10 > --with-shared-libraries > --with-packages-download-dir=/home/santiago/Documents/installers/petsc > --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 > COPTFLAGS='-O -O3 -march=native -mtune=native' FOPTFLAGS='-O -O3 > -march=native -mtune=native' CXXOPTFLAGS='-O -O3 -march=native > -mtune=native' --force > > works fine. But > > $ make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" test > Running test examples to verify correct installation > Using PETSC_DIR=/home/santiago/usr/local and PETSC_ARCH= > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 > MPI process > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 > MPI processes > See http://www.mcs.anl.gov/petsc/documentation/faq.html > lid velocity = 0.0016, prandtl # = 1., grashof # = 1. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > OS X to find memory corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, > and run > [0]PETSC ERROR: to get more information on the crash. > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Signal received > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.11.4, Sep, 28, 2019 > [0]PETSC ERROR: ./ex19 on a named isaiasPrecision-7820 by santiago Sat > Apr 25 09:52:01 2020 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [1]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > OS X to find memory corruption errors > [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, > and run > [1]PETSC ERROR: to get more information on the crash. 
> [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: Signal received > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.11.4, Sep, 28, 2019 > [1]PETSC ERROR: ./ex19 on a named isaiasPrecision-7820 by santiago Sat > Apr 25 09:52:01 2020 > [1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 > -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 > --with-shared-libraries > --with-packages-download-dir=/home/santiago/Documents/installers/petsc > --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 > COPTFLAGS="-O -O3 -march=native -mtune=native" FOPTFLAGS="-O -O3 > -march=native -mtune=native" CXXOPTFLAGS="-O -O3 -march=native > -mtune=native" --force > [1]PETSC ERROR: #1 User provided function() line 0 in unknown file > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 > -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 > --with-shared-libraries > --with-packages-download-dir=/home/santiago/Documents/installers/petsc > --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 > COPTFLAGS="-O -O3 -march=native -mtune=native" FOPTFLAGS="-O -O3 > -march=native -mtune=native" CXXOPTFLAGS="-O -O3 -march=native > -mtune=native" --force > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > > -------------------------------------------------------------------------- > [isaiasPrecision-7820:09935] 1 more process has sent help message > help-mpi-api.txt / mpi-abort > [isaiasPrecision-7820:09935] Set MCA parameter > "orte_base_help_aggregate" to 0 to see all help / error messages > > fails. As mentioned in > https://www.mcs.anl.gov/petsc/documentation/faq.html#PetscOptionsInsertFile > (even if not the same error) I checked > > $ ping `hostname` > > It works fine. > As a reference, a PETSc version that is compiled with ... --download-mumps > --download-scalapack ... works fine. > > How can I compile and successfully check PETSc, using the precompiled Mumps > in Ubuntu? > > Thanks > From san.temporal at gmail.com Sun Apr 26 06:58:44 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sun, 26 Apr 2020 08:58:44 -0300 Subject: [petsc-users] Configure with precompiled mumps and other packages In-Reply-To: References: Message-ID: Satish, Thanks. Santiago On Sat, Apr 25, 2020 at 12:05 PM Satish Balay wrote: > I have no idea whats different with UBUNTU MUMPS. > > You can try debugging it as the error message suggests. > > My suggestion is to stick with petsc-3.13 and --download-mumps. > > You can use ubuntu packages for software where API is more consistent - > like blas/lapack, MPICH/OpenMPI > > Satish > > On Sat, 25 Apr 2020, san.temporal at gmail.com wrote: > > > Hi, > > > > I am trying to compile PETSc, using the precompiled Mumps in Ubuntu. The > > available Mumps version is 5.1.2, so I use PETSc 3.11 (for 3.13 I would > > require mumps 5.2.1, not available as a precompiled package). 
> > > > The packages are: > > > > $ dpkg -l | grep mumps > > ii libmumps-5.1.2:amd64 5.1.2-4 > > amd64 Direct linear systems solver - parallel shared > > libraries > > ii libmumps-dev:amd64 5.1.2-4 > > amd64 Direct linear systems solver - parallel > > development files > > ii libmumps-ptscotch-5.1.2:amd64 5.1.2-4 > > amd64 Direct linear systems solver - > PTScotch-version > > shared libraries > > ii libmumps-ptscotch-dev:amd64 5.1.2-4 > > amd64 Direct linear systems solver - > PTScotch-version > > development files > > ii libmumps-scotch-5.1.2:amd64 5.1.2-4 > > amd64 Direct linear systems solver - Scotch-version > > shared libraries > > ii libmumps-scotch-dev:amd64 5.1.2-4 > > amd64 Direct linear systems solver - Scotch-version > > development files > > ii libmumps-seq-5.1.2:amd64 5.1.2-4 > > amd64 Direct linear systems solver - non-parallel > > shared libraries > > ii libmumps-seq-dev:amd64 5.1.2-4 > > amd64 Direct linear systems solver - non-parallel > > development files > > > > So I configure with > > > > $ ./configure --with-cc=mpicc --with-fc=mpif90 -with-cxx=mpicxx > > --prefix=/home/santiago/usr/local --with-make-np=10 > > --with-shared-libraries > > --with-packages-download-dir=/home/santiago/Documents/installers/petsc > > --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 > > COPTFLAGS='-O -O3 -march=native -mtune=native' FOPTFLAGS='-O -O3 > > -march=native -mtune=native' CXXOPTFLAGS='-O -O3 -march=native > > -mtune=native' --force > > > > works fine. But > > > > $ make PETSC_DIR=/home/santiago/usr/local PETSC_ARCH="" test > > Running test examples to verify correct installation > > Using PETSC_DIR=/home/santiago/usr/local and PETSC_ARCH= > > C/C++ example src/snes/examples/tutorials/ex19 run successfully with > 1 > > MPI process > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 > > MPI processes > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > lid velocity = 0.0016, prandtl # = 1., grashof # = 1. > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > > probably memory access out of range > > [0]PETSC ERROR: Try option -start_in_debugger or > > -on_error_attach_debugger > > [0]PETSC ERROR: or see > > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple > Mac > > OS X to find memory corruption errors > > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, > link, > > and run > > [0]PETSC ERROR: to get more information on the crash. > > [0]PETSC ERROR: --------------------- Error Message > > -------------------------------------------------------------- > > [0]PETSC ERROR: Signal received > > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html > > for trouble shooting. 
> > [0]PETSC ERROR: Petsc Release Version 3.11.4, Sep, 28, 2019 > > [0]PETSC ERROR: ./ex19 on a named isaiasPrecision-7820 by santiago > Sat > > Apr 25 09:52:01 2020 > > [1]PETSC ERROR: > > ------------------------------------------------------------------------ > > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > > probably memory access out of range > > [1]PETSC ERROR: Try option -start_in_debugger or > > -on_error_attach_debugger > > [1]PETSC ERROR: or see > > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple > Mac > > OS X to find memory corruption errors > > [1]PETSC ERROR: configure using --with-debugging=yes, recompile, > link, > > and run > > [1]PETSC ERROR: to get more information on the crash. > > [1]PETSC ERROR: --------------------- Error Message > > -------------------------------------------------------------- > > [1]PETSC ERROR: Signal received > > [1]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html > > for trouble shooting. > > [1]PETSC ERROR: Petsc Release Version 3.11.4, Sep, 28, 2019 > > [1]PETSC ERROR: ./ex19 on a named isaiasPrecision-7820 by santiago > Sat > > Apr 25 09:52:01 2020 > > [1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 > > -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 > > --with-shared-libraries > > --with-packages-download-dir=/home/santiago/Documents/installers/petsc > > --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 > > COPTFLAGS="-O -O3 -march=native -mtune=native" FOPTFLAGS="-O -O3 > > -march=native -mtune=native" CXXOPTFLAGS="-O -O3 -march=native > > -mtune=native" --force > > [1]PETSC ERROR: #1 User provided function() line 0 in unknown file > > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 > > -with-cxx=mpicxx --prefix=/home/santiago/usr/local --with-make-np=10 > > --with-shared-libraries > > --with-packages-download-dir=/home/santiago/Documents/installers/petsc > > --download-fblaslapack --with-mumps --with-scalapack --with-debugging=0 > > COPTFLAGS="-O -O3 -march=native -mtune=native" FOPTFLAGS="-O -O3 > > -march=native -mtune=native" CXXOPTFLAGS="-O -O3 -march=native > > -mtune=native" --force > > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > > > > > -------------------------------------------------------------------------- > > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD > > with errorcode 59. > > > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > > You may or may not see output from other processes, depending on > > exactly when Open MPI kills them. > > > > > -------------------------------------------------------------------------- > > [isaiasPrecision-7820:09935] 1 more process has sent help message > > help-mpi-api.txt / mpi-abort > > [isaiasPrecision-7820:09935] Set MCA parameter > > "orte_base_help_aggregate" to 0 to see all help / error messages > > > > fails. As mentioned in > > > https://www.mcs.anl.gov/petsc/documentation/faq.html#PetscOptionsInsertFile > > (even if not the same error) I checked > > > > $ ping `hostname` > > > > It works fine. > > As a reference, a PETSc version that is compiled with ... > --download-mumps > > --download-scalapack ... works fine. > > > > How can I compile and successfully check PETSc, using the precompiled > Mumps > > in Ubuntu? > > > > Thanks > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From san.temporal at gmail.com Sun Apr 26 10:34:00 2020 From: san.temporal at gmail.com (san.temporal at gmail.com) Date: Sun, 26 Apr 2020 12:34:00 -0300 Subject: [petsc-users] Reuse compiled packages In-Reply-To: References: Message-ID: So the point is I was relying on a flag that is possibly non-working. I confirm that, given the line that I had previously used ./configure PETSC_ARCH=arch-pkgs --prefix=$HOME/petsc-pkgs ... --download-fblaslapack --download-mumps --download-scalapack ... (I didn't need to do this again), I could reuse my compiled packages with ./configure PETSC_ARCH=arch-pkgs --prefix=$HOME/petsc-pkgs ... --with-fblaslapack-dir=/home/santiago/usr/local --with-mumps-dir=$HOME/petsc-pkgs --with-scalapack-dir=$HOME/petsc-pkgs ... and it worked fine. (I assume in your second ./configure command you meant PETSC_ARCH=arch-pkgs, as I used). This used previously compiled versions of fblaslapack, mumps and scalapack: ... BlasLapack: Library: -Wl,-rpath,/home/santiago/usr/local/lib -L/home/santiago/usr/local/lib -lflapack -lfblas uses 4 byte integers MPI: Version: 3 Mpiexec: mpiexec OMPI_VERSION: 2.1.1 pthread: fblaslapack: cmake: Version: 3.10.2 /usr/bin/cmake X: Library: -lX11 regex: MUMPS: Version: 5.2.1 Includes: -I/home/santiago/usr/local/include Library: -Wl,-rpath,/home/santiago/usr/local/lib -L/home/santiago/usr/local/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_co mmon -lpord scalapack: Library: -Wl,-rpath,/home/santiago/usr/local/lib -L/home/santiago/usr/local/lib -lscalapack ... I guess this can also be used to compile fblaslapack, mumps, scalapack (all the same versions as I am working with now), with whatever options I want, and then use these to configure PETSc. Thanks. Santiago On Sat, Apr 25, 2020 at 11:54 AM Satish Balay wrote: > On Sat, 25 Apr 2020, san.temporal at gmail.com wrote: > > > Hi all, > > > > While compiling 3.13.0 I have previously run > > > > $ ./configure ... --download-mumps --download-scalapack ... > > > > and now I have > > > > $ ls arch-linux2-c-opt/externalpackages/ > > fblaslapack-3.4.2/ petsc-pkg-fblaslapack-e8a03f57d64c/ > > petsc-pkg-mumps-d1a5c931b762/ petsc-pkg-scalapack-3ba8f741b828/ > > > > Are there any flags for configure that let me reuse the compiled > versions I > > have? > > > By default - if you use the same PETSC_ARCH and same download options - > the installed packages won't get rebuilt. > > However - if you want to have a single install of externalpackages shared > across multiple petsc builds - its best > to install externalpackages in a prefix location. And then use this for > subsequent builds. > > ./configure PETSC_ARCH=arch-pkgs --prefix=$HOME/petsc-pkgs > --download-mumps --download-scalapack > > ./configure PETSC__ARCH=arch-use-pkgs --with-mumps-dir=$HOME/petsc-pkgs > --with-scalapack-dir=$HOME/petsc-pkgs > > > --with-packages-search-path options hardly ever gets used - so will have > to check if its broken or not. > > Satish > > > > I did not try separate --with-mumps-include and --with-mumps-lib, as I > > think there should be a more rational approach > > All the combinations I tried failed ( stands for the current dir): > > > > 1. > > --with-packages-search-path= --with-mumps > > (what I would like best, as it would allow for a single search path for > all > > external packages already compiled) > > > > 2. > > --with-packages-search-path=/arch-linux2-c-opt --with-mumps > > > > 3. > > --with-packages-search-path=/arch-linux2-c-opt/externalpackages > > --with-mumps > > > > 4. 
> > > --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > > --with-mumps > > > > 5. > > > --with-packages-search-path=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > > > --with-mumps-dir=/arch-linux2-c-opt/externalpackages/petsc-pkg-mumps-d1a5c931b762 > > --with-mumps > > > > Note that the directory is well written. If that weren't the case, > > --with-packages-search-path would simply not find it, but > --with-mumps-dir > > would complain. > > > > Thanks in advance, > > Santiago > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sun Apr 26 12:48:44 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 26 Apr 2020 12:48:44 -0500 (CDT) Subject: [petsc-users] Reuse compiled packages In-Reply-To: References: Message-ID: On Sun, 26 Apr 2020, san.temporal at gmail.com wrote: > So the point is I was relying on a flag that is possibly non-working. > I confirm that, given the line that I had previously used > > ./configure PETSC_ARCH=arch-pkgs --prefix=$HOME/petsc-pkgs ... > --download-fblaslapack --download-mumps --download-scalapack ... > > (I didn't need to do this again), I could reuse my compiled packages with > > ./configure PETSC_ARCH=arch-pkgs --prefix=$HOME/petsc-pkgs ... > --with-fblaslapack-dir=/home/santiago/usr/local The option here is --with-blaslapack-dir > --with-mumps-dir=$HOME/petsc-pkgs --with-scalapack-dir=$HOME/petsc-pkgs ... > > and it worked fine. (I assume in your second ./configure command you meant > PETSC_ARCH=arch-pkgs, as I used). Nope - its best not to reuse build files across different builds. Hence I used a different PETSC_ARCH. [If reusing same PETSC_ARCH - its best to delete previous build files] Its best to reuse PETSC_ARCH [i.e build files] when rebuilding with same/equivalent setup again. [then it will try to avoid rebuilding stuff that doesn't need rebuilding] Sure reusing (as you've done) might work most of the time - but there will be corner cases where this will break. > This used previously compiled versions of fblaslapack, mumps and scalapack: > > ... > BlasLapack: > Library: -Wl,-rpath,/home/santiago/usr/local/lib > -L/home/santiago/usr/local/lib -lflapack -lfblas > uses 4 byte integers > MPI: > Version: 3 > Mpiexec: mpiexec > OMPI_VERSION: 2.1.1 > pthread: > fblaslapack: > cmake: > Version: 3.10.2 > /usr/bin/cmake > X: > Library: -lX11 > regex: > MUMPS: > Version: 5.2.1 > Includes: -I/home/santiago/usr/local/include > Library: -Wl,-rpath,/home/santiago/usr/local/lib > -L/home/santiago/usr/local/lib -lcmumps -ldmumps -lsmumps -lzmumps > -lmumps_co mmon -lpord > scalapack: > Library: -Wl,-rpath,/home/santiago/usr/local/lib > -L/home/santiago/usr/local/lib -lscalapack I guess you've used /home/santiago/usr/local/lib instead in place of $HOME/petsc-pkgs in the above 2 steps [with --prefix in the first - and --with-mumps-dir in the second] > ... > > I guess this can also be used to compile fblaslapack, mumps, scalapack (all > the same versions as I am working with now), with whatever options I want, > and then use these to configure PETSc. Don't understand what you are trying to do here. If using PETSc to build externalpackages - its best to build the compatible versions for petsc [don't know if this same as what you refer to by "same versions as I am working with now"]. And PETSc configure builds external packages via options specified to petsc configure. [ so don't know what you mean by "whatever options I want".] 
Satish

From Antoine.Cote3 at USherbrooke.ca  Mon Apr 27 08:57:21 2020
From: Antoine.Cote3 at USherbrooke.ca (=?iso-8859-1?Q?Antoine_C=F4t=E9?=)
Date: Mon, 27 Apr 2020 13:57:21 +0000
Subject: [petsc-users] Vec sizing using DMDA
In-Reply-To: 
References: ,
Message-ID: 

Perfect, thank you very much!

Best regards,

Antoine
________________________________
De : Matthew Knepley
Envoyé : 24 avril 2020 11:10
À : Antoine Côté
Cc : petsc-users at mcs.anl.gov
Objet : Re: [petsc-users] Vec sizing using DMDA

On Fri, Apr 24, 2020 at 10:47 AM Antoine Côté > wrote:
Hi,

Thanks for the fast response! An array of Vec would indeed solve my problem. I just don't know how to allocate it. Say I have a Vec U of the right size (created with a DMDA), and nlc = 4 load cases. How should I allocate and initialize the array?

Vec *rhs;
PetscInt i;

ierr = PetscMalloc1(nlc, &rhs);CHKERRQ(ierr);  /* nlc = number of load cases */
for (i = 0; i < nlc; ++i) {
  ierr = DMCreateGlobalVector(dm, &rhs[i]);CHKERRQ(ierr);
}
/* Access vector with rhs[i] */
for (i = 0; i < nlc; ++i) {
  ierr = VecDestroy(&rhs[i]);CHKERRQ(ierr);  /* vectors from DMCreateGlobalVector are freed with VecDestroy */
}
ierr = PetscFree(rhs);CHKERRQ(ierr);

  Thanks,

     Matt

Best regards
________________________________
De : Matthew Knepley >
Envoyé : 23 avril 2020 15:23
À : Antoine Côté
Cc : petsc-users at mcs.anl.gov >
Objet : Re: [petsc-users] Vec sizing using DMDA

On Thu, Apr 23, 2020 at 12:12 PM Antoine Côté > wrote:
Hi,

I'm using a C++/PETSc program to do Topological Optimization. A finite element analysis is solved at every iteration of the optimization. Displacements U are obtained using a KSP solver. U is a Vec created using a 3D DMDA with 3 DOF (ux, uy, uz). Boundary conditions are stored in Vec N, and forces in Vec RHS. They also have 3 DOF, as they are created using VecDuplicate on U.

My problem : I have multiple load cases (i.e. different sets of boundary conditions (b.c.) and forces). Displacements U are solved for each load case. I need to extract rapidly the b.c. and forces for each load case before solving. One way would be to change the DOF of the DMDA (e.g. for 8 load cases, we could use 3*8=24 DOF). Problem is, prior to solving, we would need to loop on nodes to extract the b.c. and forces, for every node, for every load case and for every iteration of the optimization. This is a waste of time, as b.c. and forces are constant for a given load case.

A better way would be to assemble b.c. and forces for every load case once, and read them afterwards as needed. This is currently done using a VecDuplicate on U to create multiple vectors N and RHS (N_0, N_1, RHS_0, RHS_1, etc.). Those vectors are hard coded, and can only solve a set number of load cases. I'm looking for a way to allocate dynamically the number of N and RHS vectors.

What I would like : Given nlc, the number of load cases and nn, the number of nodes in the DMDA. Create matrices N and RHS of size (DOF*nn lines, nlc columns). While optimizing : for every load case, use N[all lines, current load case column] and RHS[all lines, current load case column], solve with KSP, obtain displacement U[all lines, current load case]. Would that be possible?

Why wouldn't you just allocate an array of Vecs, since you only use one at a time?

  Thanks,

     Matt

Best regards,

Antoine Côté

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Mon Apr 27 11:59:59 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Mon, 27 Apr 2020 11:59:59 -0500 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: <4035EB5E-DA6B-4995-8710-B062C8663D41@gmail.com> References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> <4035EB5E-DA6B-4995-8710-B062C8663D41@gmail.com> Message-ID: Randy, You are absolutely right. The AOApplicationToPetsc could not be removed. Since the excessive communication is inevitable, I made two changes in petsc to ease that. One is I skewed the communication to let each rank send to ranks greater than itself first. The other is an option, -max_pending_isend, to control number of pending isends. Current default is 512. I have an MR at https://gitlab.com/petsc/petsc/-/merge_requests/2757. I tested it dozens of times with your example at 5120 ranks. It worked fine. Please try it in your environment and let me know the result. Since the failure is random, you may need to run multiple times. BTW, if no objection, I'd like to add your excellent example to petsc repo. Thanks --Junchao Zhang On Fri, Apr 24, 2020 at 5:32 PM Randall Mackie wrote: > Hi Junchao, > > I tested by commenting out the AOApplicationToPetsc calls as you suggest, > but it doesn?t work because it doesn?t maintain the proper order of the > elements in the scattered vectors. > > I attach a modified version of the test code where I put elements into the > global vector, then carry out the scatter, and check on the subcomms that > they are correct. > > You can see everything is fine with the AOApplicationToPetsc calls, but > the comparison fails when those are commented out. > > If there is some way I can achieve the right VecScatters without those > calls, I would be happy to know how to do that. > > Thank you again for your help. > > Randy > > ps. I suggest you run this test with nx=ny=nz=10 and only a couple > subcomms and maybe 4 processes to demonstrate the behavior > > > On Apr 20, 2020, at 2:45 PM, Junchao Zhang > wrote: > > Hello, Randy, > I further looked at the problem and believe it was due to overwhelming > traffic. The code sometimes fails at MPI_Waitall. I printed out MPI error > strings of bad MPI Statuses. One of them is like > "MPID_nem_tcp_connpoll(1845): Communication error with rank 25: Connection > reset by peer", which is a tcp error and has nothing to do with petsc. > Further investigation shows in the case of 5120 ranks with 320 sub > communicators, during VecScatterSetUp, each rank has around 640 > isends/irecvs neighbors, and quite a few ranks has 1280 isends neighbors. I > guess these overwhelming isends occasionally crashed the connection. > The piece of code in VecScatterSetUp is to calculate the communication > pattern. With index sets "having good locality", the calculate itself > incurs less traffic. Here good locality means indices in an index set > mostly point to local entries. However, the AOApplicationToPetsc() call in > your code unnecessarily ruined the good petsc ordering. 
If we remove > AOApplicationToPetsc() (the vecscatter result is still correct) , then each > rank uniformly has around 320 isends/irecvs. > So, test with this modification and see if it really works in your > environment. If not applicable, we can provide options in petsc to carry > out the communication in phases to avoid flooding the network (though it is > better done by MPI). > > Thanks. > --Junchao Zhang > > > On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie > wrote: > >> Hi Junchao, >> >> Thank you for your efforts. >> We tried petsc-3.13.0 but it made no difference. >> We think now the issue are with sysctl parameters, and increasing those >> seemed to have cleared up the problem. >> This also most likely explains how different clusters had different >> behaviors with our test code. >> >> We are now running our code and will report back once we are sure that >> there are no further issues. >> >> Thanks again for your help. >> >> Randy M. >> >> On Apr 17, 2020, at 8:09 AM, Junchao Zhang >> wrote: >> >> >> >> >> On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang >> wrote: >> >>> Randy, >>> I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also >>> found the error went away with petsc-3.13. However, I have not figured out >>> what is the bug and which commit fixed it :). >>> So at your side, it is better to use the latest petsc. >>> >> I want to add that even with petsc-3.12.4 the error is random. I was >> only able to reproduce the error once, so I can not claim petsc-3.13 >> actually fixed it (or, the bug is really in petsc). >> >> >>> --Junchao Zhang >>> >>> >>> On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang >>> wrote: >>> >>>> Randy, >>>> Up to now I could not reproduce your error, even with the biggest >>>> mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 >>>> While I continue doing test, you can try other options. It looks you >>>> want to duplicate a vector to subcomms. I don't think you need the two >>>> lines: >>>> >>>> call AOApplicationToPetsc(aoParent,nis,ind1,ierr) >>>> call AOApplicationToPetsc(aoSub,nis,ind2,ierr) >>>> >>>> In addition, you can use simpler and more memory-efficient index sets. >>>> There is a petsc example for this task, see case 3 in >>>> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c >>>> BTW, it is good to use petsc master so we are on the same page. >>>> --Junchao Zhang >>>> >>>> >>>> On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie >>>> wrote: >>>> >>>>> Hi Junchao, >>>>> >>>>> So I was able to create a small test code that duplicates the issue we >>>>> have been having, and it is attached to this email in a zip file. >>>>> Included is the test.F90 code, the commands to duplicate crash and to >>>>> duplicate a successful run, output errors, and our petsc configuration. >>>>> >>>>> Our findings to date include: >>>>> >>>>> The error is reproducible in a very short time with this script >>>>> It is related to nproc*nsubs and (although to a less extent) to DM >>>>> grid size >>>>> It happens regardless of MPI implementation (mpich, intel mpi 2018, >>>>> 2019, openmpi) or compiler (gfortran/gcc , intel 2018) >>>>> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to >>>>> slightly increase the limit, but still fails on the full machine set. 
>>>>> Nothing looks interesting on valgrind >>>>> >>>>> Our initial tests were carried out on an Azure cluster, but we also >>>>> tested on our smaller cluster, and we found the following: >>>>> >>>>> Works: >>>>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile >>>>> ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>>>> >>>>> Crashes (this works on Azure) >>>>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile >>>>> ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>>>> >>>>> So it looks like it may also be related to the physical number of >>>>> nodes as well. >>>>> >>>>> In any case, even with 2560 processes on 192 cores the memory does not >>>>> go above 3.5 Gbyes so you don?t need a huge cluster to test. >>>>> >>>>> Thanks, >>>>> >>>>> Randy M. >>>>> >>>>> >>>>> >>>>> On Apr 14, 2020, at 12:23 PM, Junchao Zhang >>>>> wrote: >>>>> >>>>> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why >>>>> I doubted it was the problem. Even if users configure petsc with 64-bit >>>>> indices, we use PetscMPIInt in MPI calls. So it is not a problem. >>>>> Try -vecscatter_type mpi1 to restore to the original VecScatter >>>>> implementation. If the problem still remains, could you provide a test >>>>> example for me to debug? >>>>> >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie >>>>> wrote: >>>>> >>>>>> Hi Junchao, >>>>>> >>>>>> We have tried your two suggestions but the problem remains. >>>>>> And the problem seems to be on the MPI_Isend line 117 in >>>>>> PetscGatherMessageLengths and not MPI_AllReduce. >>>>>> >>>>>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking >>>>>> the problem must be elsewhere and not MPI. >>>>>> >>>>>> Give that this is a 64 bit indices build of PETSc, is there some >>>>>> possible incompatibility between PETSc and MPI calls? >>>>>> >>>>>> We are open to any other possible suggestions to try as other than >>>>>> valgrind on thousands of processes we seem to have run out of ideas. >>>>>> >>>>>> Thanks, Randy M. >>>>>> >>>>>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang >>>>>> wrote: >>>>>> >>>>>> >>>>>> --Junchao Zhang >>>>>> >>>>>> >>>>>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang < >>>>>> junchao.zhang at gmail.com> wrote: >>>>>> >>>>>>> Randy, >>>>>>> Someone reported similar problem before. It turned out an Intel >>>>>>> MPI MPI_Allreduce bug. A workaround is setting the environment variable >>>>>>> I_MPI_ADJUST_ALLREDUCE=1.arr >>>>>>> >>>>>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>>>>> >>>>>>> But you mentioned mpich also had the error. So maybe the problem >>>>>>> is not the same. So let's try the workaround first. If it doesn't work, add >>>>>>> another petsc option -build_twosided allreduce, which is a workaround for >>>>>>> Intel MPI_Ibarrier bugs we met. >>>>>>> Thanks. >>>>>>> --Junchao Zhang >>>>>>> >>>>>>> >>>>>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie < >>>>>>> rlmackie862 at gmail.com> wrote: >>>>>>> >>>>>>>> Dear PETSc users, >>>>>>>> >>>>>>>> We are trying to understand an issue that has come up in running >>>>>>>> our code on a large cloud cluster with a large number of processes and >>>>>>>> subcomms. >>>>>>>> This is code that we use daily on multiple clusters without >>>>>>>> problems, and that runs valgrind clean for small test problems. 
>>>>>>>> >>>>>>>> The run generates the following messages, but doesn?t crash, just >>>>>>>> seems to hang with all processes continuing to show activity: >>>>>>>> >>>>>>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in >>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>>>>>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in >>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>>>>>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in >>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>>>>>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in >>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>>>>>>> >>>>>>>> >>>>>>>> Looking at line 117 in PetscGatherMessageLengths we find the >>>>>>>> offending statement is the MPI_Isend: >>>>>>>> >>>>>>>> >>>>>>>> /* Post the Isends with the message length-info */ >>>>>>>> for (i=0,j=0; i>>>>>>> if (ilengths[i]) { >>>>>>>> ierr = >>>>>>>> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>>>>>>> j++; >>>>>>>> } >>>>>>>> } >>>>>>>> >>>>>>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving >>>>>>>> the same problem. >>>>>>>> >>>>>>>> We suspect there is some limit being set on this cloud cluster on >>>>>>>> the number of file connections or something, but we don?t know. >>>>>>>> >>>>>>>> Anyone have any ideas? We are sort of grasping for straws at this >>>>>>>> point. >>>>>>>> >>>>>>>> Thanks, Randy M. >>>>>>>> >>>>>>> >>>>>> >>>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Mon Apr 27 15:11:11 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Mon, 27 Apr 2020 13:11:11 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> <4035EB5E-DA6B-4995-8710-B062C8663D41@gmail.com> Message-ID: <7DD6F748-52CF-49A5-83D5-99733CCC8DBB@gmail.com> Hi Junchao, I have no objection if you want to add the test code to the petsc repository. In the meantime, we will check out the changes you?ve made and let you know what we find. Thanks, Randy M. > On Apr 27, 2020, at 9:59 AM, Junchao Zhang wrote: > > Randy, > You are absolutely right. The AOApplicationToPetsc could not be removed. Since the excessive communication is inevitable, I made two changes in petsc to ease that. One is I skewed the communication to let each rank send to ranks greater than itself first. The other is an option, -max_pending_isend, to control number of pending isends. Current default is 512. > I have an MR at https://gitlab.com/petsc/petsc/-/merge_requests/2757 . I tested it dozens of times with your example at 5120 ranks. It worked fine. > Please try it in your environment and let me know the result. Since the failure is random, you may need to run multiple times. > > BTW, if no objection, I'd like to add your excellent example to petsc repo. > > Thanks > --Junchao Zhang > > > On Fri, Apr 24, 2020 at 5:32 PM Randall Mackie > wrote: > Hi Junchao, > > I tested by commenting out the AOApplicationToPetsc calls as you suggest, but it doesn?t work because it doesn?t maintain the proper order of the elements in the scattered vectors. > > I attach a modified version of the test code where I put elements into the global vector, then carry out the scatter, and check on the subcomms that they are correct. 
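The scatter being exercised here, filling a global vector and pulling selected entries into vectors that live on subcommunicators, boils down to a handful of calls. A minimal C sketch follows (the actual test code in this thread is Fortran); xGlobal, xSub and the index sets isGlobal/isSub, already converted to PETSc ordering (for example with AOApplicationToPetsc), are assumed to have been built beforehand.

#include <petscvec.h>

/* Scatter entries of a global vector (on PETSC_COMM_WORLD) into a vector
   created on a subcommunicator, as selected by the two index sets. */
static PetscErrorCode ScatterGlobalToSub(Vec xGlobal, IS isGlobal, Vec xSub, IS isSub)
{
  VecScatter     scatter;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecScatterCreate(xGlobal, isGlobal, xSub, isSub, &scatter);CHKERRQ(ierr);
  ierr = VecScatterBegin(scatter, xGlobal, xSub, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scatter, xGlobal, xSub, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}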
> > You can see everything is fine with the AOApplicationToPetsc calls, but the comparison fails when those are commented out. > > If there is some way I can achieve the right VecScatters without those calls, I would be happy to know how to do that. > > Thank you again for your help. > > Randy > > ps. I suggest you run this test with nx=ny=nz=10 and only a couple subcomms and maybe 4 processes to demonstrate the behavior > > >> On Apr 20, 2020, at 2:45 PM, Junchao Zhang > wrote: >> >> Hello, Randy, >> I further looked at the problem and believe it was due to overwhelming traffic. The code sometimes fails at MPI_Waitall. I printed out MPI error strings of bad MPI Statuses. One of them is like "MPID_nem_tcp_connpoll(1845): Communication error with rank 25: Connection reset by peer", which is a tcp error and has nothing to do with petsc. >> Further investigation shows in the case of 5120 ranks with 320 sub communicators, during VecScatterSetUp, each rank has around 640 isends/irecvs neighbors, and quite a few ranks has 1280 isends neighbors. I guess these overwhelming isends occasionally crashed the connection. >> The piece of code in VecScatterSetUp is to calculate the communication pattern. With index sets "having good locality", the calculate itself incurs less traffic. Here good locality means indices in an index set mostly point to local entries. However, the AOApplicationToPetsc() call in your code unnecessarily ruined the good petsc ordering. If we remove AOApplicationToPetsc() (the vecscatter result is still correct) , then each rank uniformly has around 320 isends/irecvs. >> So, test with this modification and see if it really works in your environment. If not applicable, we can provide options in petsc to carry out the communication in phases to avoid flooding the network (though it is better done by MPI). >> >> Thanks. >> --Junchao Zhang >> >> >> On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie > wrote: >> Hi Junchao, >> >> Thank you for your efforts. >> We tried petsc-3.13.0 but it made no difference. >> We think now the issue are with sysctl parameters, and increasing those seemed to have cleared up the problem. >> This also most likely explains how different clusters had different behaviors with our test code. >> >> We are now running our code and will report back once we are sure that there are no further issues. >> >> Thanks again for your help. >> >> Randy M. >> >>> On Apr 17, 2020, at 8:09 AM, Junchao Zhang > wrote: >>> >>> >>> >>> >>> On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang > wrote: >>> Randy, >>> I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also found the error went away with petsc-3.13. However, I have not figured out what is the bug and which commit fixed it :). >>> So at your side, it is better to use the latest petsc. >>> I want to add that even with petsc-3.12.4 the error is random. I was only able to reproduce the error once, so I can not claim petsc-3.13 actually fixed it (or, the bug is really in petsc). >>> >>> --Junchao Zhang >>> >>> >>> On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang > wrote: >>> Randy, >>> Up to now I could not reproduce your error, even with the biggest mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 >>> While I continue doing test, you can try other options. It looks you want to duplicate a vector to subcomms. 
I don't think you need the two lines: >>> call AOApplicationToPetsc(aoParent,nis,ind1,ierr) >>> call AOApplicationToPetsc(aoSub,nis,ind2,ierr) >>> In addition, you can use simpler and more memory-efficient index sets. There is a petsc example for this task, see case 3 in https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c >>> BTW, it is good to use petsc master so we are on the same page. >>> --Junchao Zhang >>> >>> >>> On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie > wrote: >>> Hi Junchao, >>> >>> So I was able to create a small test code that duplicates the issue we have been having, and it is attached to this email in a zip file. >>> Included is the test.F90 code, the commands to duplicate crash and to duplicate a successful run, output errors, and our petsc configuration. >>> >>> Our findings to date include: >>> >>> The error is reproducible in a very short time with this script >>> It is related to nproc*nsubs and (although to a less extent) to DM grid size >>> It happens regardless of MPI implementation (mpich, intel mpi 2018, 2019, openmpi) or compiler (gfortran/gcc , intel 2018) >>> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to slightly increase the limit, but still fails on the full machine set. >>> Nothing looks interesting on valgrind >>> >>> Our initial tests were carried out on an Azure cluster, but we also tested on our smaller cluster, and we found the following: >>> >>> Works: >>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>> >>> Crashes (this works on Azure) >>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>> >>> So it looks like it may also be related to the physical number of nodes as well. >>> >>> In any case, even with 2560 processes on 192 cores the memory does not go above 3.5 Gbyes so you don?t need a huge cluster to test. >>> >>> Thanks, >>> >>> Randy M. >>> >>> >>> >>>> On Apr 14, 2020, at 12:23 PM, Junchao Zhang > wrote: >>>> >>>> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I doubted it was the problem. Even if users configure petsc with 64-bit indices, we use PetscMPIInt in MPI calls. So it is not a problem. >>>> Try -vecscatter_type mpi1 to restore to the original VecScatter implementation. If the problem still remains, could you provide a test example for me to debug? >>>> >>>> --Junchao Zhang >>>> >>>> >>>> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie > wrote: >>>> Hi Junchao, >>>> >>>> We have tried your two suggestions but the problem remains. >>>> And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. >>>> >>>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the problem must be elsewhere and not MPI. >>>> >>>> Give that this is a 64 bit indices build of PETSc, is there some possible incompatibility between PETSc and MPI calls? >>>> >>>> We are open to any other possible suggestions to try as other than valgrind on thousands of processes we seem to have run out of ideas. >>>> >>>> Thanks, Randy M. >>>> >>>>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang > wrote: >>>>> >>>>> >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang > wrote: >>>>> Randy, >>>>> Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. 
A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr >>>>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>>>> But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. >>>>> Thanks. >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: >>>>> Dear PETSc users, >>>>> >>>>> We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. >>>>> This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. >>>>> >>>>> The run generates the following messages, but doesn?t crash, just seems to hang with all processes continuing to show activity: >>>>> >>>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c >>>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c >>>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c >>>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c >>>>> >>>>> >>>>> Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend: >>>>> >>>>> >>>>> /* Post the Isends with the message length-info */ >>>>> for (i=0,j=0; i>>>> if (ilengths[i]) { >>>>> ierr = MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr); >>>>> j++; >>>>> } >>>>> } >>>>> >>>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the same problem. >>>>> >>>>> We suspect there is some limit being set on this cloud cluster on the number of file connections or something, but we don?t know. >>>>> >>>>> Anyone have any ideas? We are sort of grasping for straws at this point. >>>>> >>>>> Thanks, Randy M. >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Zane.Jakobs at colorado.edu Mon Apr 27 17:13:20 2020 From: Zane.Jakobs at colorado.edu (Zane Charles Jakobs) Date: Mon, 27 Apr 2020 15:13:20 -0700 Subject: [petsc-users] TSAdjoint "cannot locate function MatDenseGetColumn_C in object" Message-ID: Hi PETSc devs, I'm writing some code that uses Tao to do variational data assimilation, and I get the following error message upon calling TaoSolve(): [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: Cannot locate function MatDenseGetColumn_C in object [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12 GIT Date: 2020-02-21 16:53:25 -0600 [0]PETSC ERROR: ./test_var on a arch-linux2-c-debug named DiffeoInvariant by diffeoinvariant Mon Apr 27 14:57:39 2020 [0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native -fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 --with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native -fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich [0]PETSC ERROR: #1 MatDenseGetColumn() line 2944 in /usr/local/petsc/src/mat/impls/dense/seq/dense.c [0]PETSC ERROR: #2 TSAdjointStep_RK() line 921 in /usr/local/petsc/src/ts/impls/explicit/rk/rk.c [0]PETSC ERROR: #3 TSAdjointStep() line 1506 in /usr/local/petsc/src/ts/interface/sensitivity/tssen.c [0]PETSC ERROR: #4 TSAdjointSolve() line 1568 in /usr/local/petsc/src/ts/interface/sensitivity/tssen.c [0]PETSC ERROR: #5 VARFormGradient() line 212 in src/var.c [0]PETSC ERROR: #6 TaoComputeObjectiveAndGradient() line 275 in /usr/local/petsc/src/tao/interface/taosolver_fg.c [0]PETSC ERROR: #7 TaoSolve_LMVM() line 23 in /usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c [0]PETSC ERROR: #8 TaoSolve() line 219 in /usr/local/petsc/src/tao/interface/taosolver.c [0]PETSC ERROR: #9 VarInfoOptimize() line 486 in src/var.c I set the TSAdjoint in question to use RK4 for timestepping, and I know that TSAdjoint and company don't necessarily implement all of the timestepping methods that the "forward" TS does, so that might be relevant. Any ideas what I might be doing incorrectly? Maybe I'm forgetting to set the type of some Mat (e.g. the Jacobian) somewhere? (Would/could that cause this error?) Thanks! -Zane Jakobs -------------- next part -------------- An HTML attachment was scrubbed... URL: From yjwu16 at gmail.com Tue Apr 28 08:21:56 2020 From: yjwu16 at gmail.com (Yingjie Wu) Date: Tue, 28 Apr 2020 21:21:56 +0800 Subject: [petsc-users] Problems about tolerances set in TS Message-ID: Dear PETSc developers Hi, I have recently used TS to solve nonlinear equations with time terms. since the convergence of my model is not very good, i would like to set to iterative fixed nonlinear steps per time step. If the problem does not meet the SNES convergence criteria after fixed number of nonlinear steps , then go to the next time step calculation. I tried -snes_max_it , but didn't achieve the effect I wanted, and the program stopped after iterating the fixed number of steps. How should I set up in the program? Thanks, Yingjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 28 08:27:56 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Apr 2020 09:27:56 -0400 Subject: [petsc-users] Problems about tolerances set in TS In-Reply-To: References: Message-ID: On Tue, Apr 28, 2020 at 9:23 AM Yingjie Wu wrote: > Dear PETSc developers > Hi, > > I have recently used TS to solve nonlinear equations with time terms. > since the convergence of my model is not very good, i would like to set to > iterative fixed nonlinear steps per time step. If the problem does not meet > the SNES convergence criteria after fixed number of nonlinear steps , then > go to the next time step calculation. I tried -snes_max_it , but didn't > achieve the effect I wanted, and the program stopped after iterating the > fixed number of steps. How should I set up in the program? 
> I think -ts_adapt_always_accept will accept the step after a SNES failure, but Hong would know better. Thanks, Matt > Thanks, > > Yingjie > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Tue Apr 28 09:31:13 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 28 Apr 2020 14:31:13 +0000 Subject: [petsc-users] TSAdjoint "cannot locate function MatDenseGetColumn_C in object" In-Reply-To: References: Message-ID: On Apr 27, 2020, at 5:13 PM, Zane Charles Jakobs > wrote: Hi PETSc devs, I'm writing some code that uses Tao to do variational data assimilation, and I get the following error message upon calling TaoSolve(): [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: Cannot locate function MatDenseGetColumn_C in object [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.12.4-783-g88ddbcab12 GIT Date: 2020-02-21 16:53:25 -0600 [0]PETSC ERROR: ./test_var on a arch-linux2-c-debug named DiffeoInvariant by diffeoinvariant Mon Apr 27 14:57:39 2020 [0]PETSC ERROR: Configure options CFLAGS="-O3 -march=native -mtune=native -fPIE" --with-shared-libraries=1 --with-openmp=1 --with-threads=1 --with-fortran=0 --with-avx2=1 CXXOPTFLAGS="-O3 -march=native -mtune=native -fPIE" --with-cc=clang --with-cxx=clang++ --download-mpich [0]PETSC ERROR: #1 MatDenseGetColumn() line 2944 in /usr/local/petsc/src/mat/impls/dense/seq/dense.c [0]PETSC ERROR: #2 TSAdjointStep_RK() line 921 in /usr/local/petsc/src/ts/impls/explicit/rk/rk.c [0]PETSC ERROR: #3 TSAdjointStep() line 1506 in /usr/local/petsc/src/ts/interface/sensitivity/tssen.c [0]PETSC ERROR: #4 TSAdjointSolve() line 1568 in /usr/local/petsc/src/ts/interface/sensitivity/tssen.c [0]PETSC ERROR: #5 VARFormGradient() line 212 in src/var.c [0]PETSC ERROR: #6 TaoComputeObjectiveAndGradient() line 275 in /usr/local/petsc/src/tao/interface/taosolver_fg.c [0]PETSC ERROR: #7 TaoSolve_LMVM() line 23 in /usr/local/petsc/src/tao/unconstrained/impls/lmvm/lmvm.c [0]PETSC ERROR: #8 TaoSolve() line 219 in /usr/local/petsc/src/tao/interface/taosolver.c [0]PETSC ERROR: #9 VarInfoOptimize() line 486 in src/var.c I set the TSAdjoint in question to use RK4 for timestepping, and I know that TSAdjoint and company don't necessarily implement all of the timestepping methods that the "forward" TS does, so that might be relevant. Any ideas what I might be doing incorrectly? Maybe I'm forgetting to set the type of some Mat (e.g. the Jacobian) somewhere? (Would/could that cause this error?) The Jacobian for the quadrature term should be a dense matrix. See src/ts/tutorials/power_grid/ex9opt.c for a quick example. Hong (Mr.) Thanks! -Zane Jakobs -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Tue Apr 28 09:51:48 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 28 Apr 2020 14:51:48 +0000 Subject: [petsc-users] Problems about tolerances set in TS In-Reply-To: References: Message-ID: -ts_error_if_step_fails 0 You might want to find out why the nonlinear solver does not converge first. 
If you have a hand-written Jacobian, you can validate it with -snes_test_jacobian 1 (for a small test case). Hong (Mr.) > On Apr 28, 2020, at 8:21 AM, Yingjie Wu wrote: > > Dear PETSc developers > Hi, > > I have recently used TS to solve nonlinear equations with time terms. since the convergence of my model is not very good, i would like to set to iterative fixed nonlinear steps per time step. If the problem does not meet the SNES convergence criteria after fixed number of nonlinear steps , then go to the next time step calculation. I tried -snes_max_it , but didn't achieve the effect I wanted, and the program stopped after iterating the fixed number of steps. How should I set up in the program? > > Thanks, > Yingjie From yjwu16 at gmail.com Tue Apr 28 23:00:54 2020 From: yjwu16 at gmail.com (Yingjie Wu) Date: Wed, 29 Apr 2020 12:00:54 +0800 Subject: [petsc-users] Problems about tolerances set in TS In-Reply-To: References: Message-ID: Thank you very much for your reply. I tried both switches, but unfortunately they didn't seem to meet my needs. -ts_adapt_always_accept The switch doesn't seem to work, reporting errors when the maximum number of steps is reached without convergence, then the program exits. -ts_error_if_step_fails 0 This switch accepts the non-convergence time step and outputs the result, but does not continue into the next time step calculation ( The time step hasn't reached the maximum time step I set). And I wonder if the variable behind this switch is optional? What does it mean? I hope to achieve in the case of non-convergence Newton step( for example, the maximum number of Newton iteration steps reached -snes_max_it 50), can go in the next time step calculation. Thanks, Yingjie Zhang, Hong ?2020?4?28??? ??10:51??? > -ts_error_if_step_fails 0 > > You might want to find out why the nonlinear solver does not converge > first. If you have a hand-written Jacobian, you can validate it with > -snes_test_jacobian 1 (for a small test case). > > Hong (Mr.) > > > On Apr 28, 2020, at 8:21 AM, Yingjie Wu wrote: > > > > Dear PETSc developers > > Hi, > > > > I have recently used TS to solve nonlinear equations with time terms. > since the convergence of my model is not very good, i would like to set to > iterative fixed nonlinear steps per time step. If the problem does not meet > the SNES convergence criteria after fixed number of nonlinear steps , then > go to the next time step calculation. I tried -snes_max_it , but didn't > achieve the effect I wanted, and the program stopped after iterating the > fixed number of steps. How should I set up in the program? > > > > Thanks, > > Yingjie > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eda.oktay at metu.edu.tr Wed Apr 29 05:47:32 2020 From: eda.oktay at metu.edu.tr (Eda Oktay) Date: Wed, 29 Apr 2020 13:47:32 +0300 Subject: [petsc-users] Gather and Broadcast Parallel Vectors in k-means algorithm In-Reply-To: <0c10fc0a-3d86-e91a-f349-fd7c087ba8ed@anl.gov> References: <0c10fc0a-3d86-e91a-f349-fd7c087ba8ed@anl.gov> Message-ID: Dear Richard, I am trying to use spectral clustering algorithm by using k-means clustering algorithm at some point. I am doing this by producing a matrix consisting of eigenvectors (of the adjacency matrix of the graph that I want to partition), then forming row vectors of this matrix. This is the part that I am using parallel vector. By using the output from k-means, I am trying to cluster these row vectors. 
To cluster these vectors, I think I need all row vectors in all processes. I wanted to use sequential vectors, however, I couldn't find a different way that I form row vectors of a matrix. I am trying to use VecScatterCreateToAll, however, since my vector is parallel crated by VecDuplicateVecs, my input is not in correct type, so I get error. I still can't get how can I use this function in parallel vector created by VecDuplicateVecs. Thank you all for your help. Eda Mills, Richard Tran , 7 Nis 2020 Sal, 01:51 tarihinde ?unu yazd?: > Hi Eda, > > I think that you probably want to use VecScatter routines, as Junchao > has suggested, instead of the lower level star forest for this. I > believe that VecScatterCreateToZero() is what you want for the broadcast > problem you describe, in the second part of your question. I'm not sure > what you are trying to do in the first part. Taking a parallel vector > and then copying its entire contents to a sequential vector residing on > each process is not scalable, and a lot of the design that has gone into > PETSc is to prevent the user from ever needing to do things like that. > Can you please tell us what you intend to do with these sequential vectors? > > I'm also wondering why, later in your message, you say that you get > cluster assignments from Matlab, and then "to cluster row vectors > according to this information, all processors need to have all of the > row vectors". Do you mean you want to get all of the row vectors copied > onto all of the processors so that you can compute the cluster > centroids? If so, computing the cluster centroids can be done without > copying the row vectors onto all processors if you use a communication > operation like MPI_Allreduce(). > > Lastly, let me add that I've done a fair amount of work implementing > clustering algorithms on distributed memory parallel machines, but > outside of PETSc. I was thinking that I should implement some of these > routines using PETSc. I can't get to this immediately, but I'm wondering > if you might care to tell me a bit more about the clustering problems > you need to solve and how having some support for this in PETSc might > (or might not) help. > > Best regards, > Richard > > On 4/4/20 1:39 AM, Eda Oktay wrote: > > Hi all, > > > > I created a parallel vector UV, by using VecDuplicateVecs since I need > > row vectors of a matrix. However, I need the whole vector be in all > > processors, which means I need to gather all and broadcast them to all > > processors. To gather, I tried to use VecStrideGatherAll: > > > > Vec UVG; > > VecStrideGatherAll(UV,UVG,INSERT_VALUES); > > VecView(UVG,PETSC_VIEWER_STDOUT_WORLD); > > > > however when I try to view the vector, I get the following error. > > > > [3]PETSC ERROR: Invalid argument > > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > > [3]PETSC ERROR: See > > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting. > > [3]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > > [3]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > > 11:22:54 2020 > > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > > [0]PETSC ERROR: See > > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting. 
> > [0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > > [0]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > > 11:22:54 2020 > > [0]PETSC ERROR: Configure options --download-mpich --download-openblas > > --download-slepc --download-metis --download-parmetis --download-chaco > > --with-X=1 > > [0]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > > ./clustering_son_final_edgecut_without_parmetis on a > > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > > 11:22:54 2020 > > [1]PETSC ERROR: Configure options --download-mpich --download-openblas > > --download-slepc --download-metis --download-parmetis --download-chaco > > --with-X=1 > > [1]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > > Configure options --download-mpich --download-openblas > > --download-slepc --download-metis --download-parmetis --download-chaco > > --with-X=1 > > [3]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > > > > I couldn't understand why I am getting this error. Is this because of > > UV being created by VecDuplicateVecs? How can I solve this problem? > > > > The other question is broadcasting. After gathering all elements of > > the vector UV, I need to broadcast them to all processors. I found > > PetscSFBcastBegin. However, I couldn't understand the PetscSF concept > > properly. I couldn't adjust my question to the star forest concept. > > > > My problem is: If I have 4 processors, I create a matrix whose columns > > are 4 smallest eigenvectors, say of size 72. Then by defining each row > > of this matrix as a vector, I cluster them by using k-means > > clustering algorithm. For now, I cluster them by using MATLAB and I > > obtain a vector showing which row vector is in which cluster. After > > getting this vector, to cluster row vectors according to this > > information, all processors need to have all of the row vectors. > > > > According to this problem, how can I use the star forest concept? > > > > I will be glad if you can help me about this problem since I don't > > have enough knowledge about graph theory. An if you have any idea > > about how can I use k-means algorithm in a more practical way, please > > let me know. > > > > Thanks! > > > > Eda > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Wed Apr 29 10:26:38 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Wed, 29 Apr 2020 15:26:38 +0000 Subject: [petsc-users] Problems about tolerances set in TS In-Reply-To: References: Message-ID: <34442115-CFFE-4FFD-AFB5-05F24B7E546C@anl.gov> Please send the list of your command line options and the screen output with -ts_monitor. Hong (Mr.) On Apr 28, 2020, at 11:00 PM, Yingjie Wu > wrote: Thank you very much for your reply. I tried both switches, but unfortunately they didn't seem to meet my needs. -ts_adapt_always_accept The switch doesn't seem to work, reporting errors when the maximum number of steps is reached without convergence, then the program exits. -ts_error_if_step_fails 0 This switch accepts the non-convergence time step and outputs the result, but does not continue into the next time step calculation ( The time step hasn't reached the maximum time step I set). And I wonder if the variable behind this switch is optional? What does it mean? 
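For reference, -ts_error_if_step_fails takes a PetscBool, so the trailing 0 simply means "false" (the default is true, that is, TS raises an error when a step fails). The same setting can be made in code; a one-line sketch:

/* Programmatic equivalent of the command-line flag -ts_error_if_step_fails 0:
   do not raise an error on a failed step; the failure is then reported
   through TSGetConvergedReason() after TSSolve(). */
ierr = TSSetErrorIfStepFails(ts, PETSC_FALSE);CHKERRQ(ierr);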
I hope to achieve in the case of non-convergence Newton step( for example, the maximum number of Newton iteration steps reached -snes_max_it 50), can go in the next time step calculation. Thanks, Yingjie Zhang, Hong > ?2020?4?28??? ??10:51??? -ts_error_if_step_fails 0 You might want to find out why the nonlinear solver does not converge first. If you have a hand-written Jacobian, you can validate it with -snes_test_jacobian 1 (for a small test case). Hong (Mr.) > On Apr 28, 2020, at 8:21 AM, Yingjie Wu > wrote: > > Dear PETSc developers > Hi, > > I have recently used TS to solve nonlinear equations with time terms. since the convergence of my model is not very good, i would like to set to iterative fixed nonlinear steps per time step. If the problem does not meet the SNES convergence criteria after fixed number of nonlinear steps , then go to the next time step calculation. I tried -snes_max_it , but didn't achieve the effect I wanted, and the program stopped after iterating the fixed number of steps. How should I set up in the program? > > Thanks, > Yingjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Wed Apr 29 14:51:39 2020 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 29 Apr 2020 14:51:39 -0500 Subject: [petsc-users] Gather and Broadcast Parallel Vectors in k-means algorithm In-Reply-To: References: <0c10fc0a-3d86-e91a-f349-fd7c087ba8ed@anl.gov> Message-ID: Eda, You are trying to duplicate a group of MPI vectors to every process. Am I correct? --Junchao Zhang On Wed, Apr 29, 2020 at 5:48 AM Eda Oktay wrote: > Dear Richard, > > I am trying to use spectral clustering algorithm by using k-means > clustering algorithm at some point. I am doing this by producing a matrix > consisting of eigenvectors (of the adjacency matrix of the graph that I > want to partition), then forming row vectors of this matrix. This is the > part that I am using parallel vector. By using the output from k-means, I > am trying to cluster these row vectors. To cluster these vectors, I think I > need all row vectors in all processes. I wanted to use sequential vectors, > however, I couldn't find a different way that I form row vectors of a > matrix. > > I am trying to use VecScatterCreateToAll, however, since my vector is > parallel crated by VecDuplicateVecs, my input is not in correct type, so I > get error. I still can't get how can I use this function in parallel vector > created by VecDuplicateVecs. > > Thank you all for your help. > > Eda > > Mills, Richard Tran , 7 Nis 2020 Sal, 01:51 tarihinde > ?unu yazd?: > >> Hi Eda, >> >> I think that you probably want to use VecScatter routines, as Junchao >> has suggested, instead of the lower level star forest for this. I >> believe that VecScatterCreateToZero() is what you want for the broadcast >> problem you describe, in the second part of your question. I'm not sure >> what you are trying to do in the first part. Taking a parallel vector >> and then copying its entire contents to a sequential vector residing on >> each process is not scalable, and a lot of the design that has gone into >> PETSc is to prevent the user from ever needing to do things like that. >> Can you please tell us what you intend to do with these sequential >> vectors? >> >> I'm also wondering why, later in your message, you say that you get >> cluster assignments from Matlab, and then "to cluster row vectors >> according to this information, all processors need to have all of the >> row vectors". 
Do you mean you want to get all of the row vectors copied >> onto all of the processors so that you can compute the cluster >> centroids? If so, computing the cluster centroids can be done without >> copying the row vectors onto all processors if you use a communication >> operation like MPI_Allreduce(). >> >> Lastly, let me add that I've done a fair amount of work implementing >> clustering algorithms on distributed memory parallel machines, but >> outside of PETSc. I was thinking that I should implement some of these >> routines using PETSc. I can't get to this immediately, but I'm wondering >> if you might care to tell me a bit more about the clustering problems >> you need to solve and how having some support for this in PETSc might >> (or might not) help. >> >> Best regards, >> Richard >> >> On 4/4/20 1:39 AM, Eda Oktay wrote: >> > Hi all, >> > >> > I created a parallel vector UV, by using VecDuplicateVecs since I need >> > row vectors of a matrix. However, I need the whole vector be in all >> > processors, which means I need to gather all and broadcast them to all >> > processors. To gather, I tried to use VecStrideGatherAll: >> > >> > Vec UVG; >> > VecStrideGatherAll(UV,UVG,INSERT_VALUES); >> > VecView(UVG,PETSC_VIEWER_STDOUT_WORLD); >> > >> > however when I try to view the vector, I get the following error. >> > >> > [3]PETSC ERROR: Invalid argument >> > [3]PETSC ERROR: Wrong type of object: Parameter # 1 >> > [3]PETSC ERROR: See >> > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >> shooting. >> > [3]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 >> > [3]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a >> > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 >> > 11:22:54 2020 >> > [3]PETSC ERROR: Wrong type of object: Parameter # 1 >> > [0]PETSC ERROR: See >> > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >> shooting. >> > [0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 >> > [0]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a >> > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 >> > 11:22:54 2020 >> > [0]PETSC ERROR: Configure options --download-mpich --download-openblas >> > --download-slepc --download-metis --download-parmetis --download-chaco >> > --with-X=1 >> > [0]PETSC ERROR: #1 VecStrideGatherAll() line 646 in >> > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c >> > ./clustering_son_final_edgecut_without_parmetis on a >> > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 >> > 11:22:54 2020 >> > [1]PETSC ERROR: Configure options --download-mpich --download-openblas >> > --download-slepc --download-metis --download-parmetis --download-chaco >> > --with-X=1 >> > [1]PETSC ERROR: #1 VecStrideGatherAll() line 646 in >> > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c >> > Configure options --download-mpich --download-openblas >> > --download-slepc --download-metis --download-parmetis --download-chaco >> > --with-X=1 >> > [3]PETSC ERROR: #1 VecStrideGatherAll() line 646 in >> > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c >> > >> > I couldn't understand why I am getting this error. Is this because of >> > UV being created by VecDuplicateVecs? How can I solve this problem? >> > >> > The other question is broadcasting. After gathering all elements of >> > the vector UV, I need to broadcast them to all processors. I found >> > PetscSFBcastBegin. However, I couldn't understand the PetscSF concept >> > properly. 
I couldn't adjust my question to the star forest concept. >> > >> > My problem is: If I have 4 processors, I create a matrix whose columns >> > are 4 smallest eigenvectors, say of size 72. Then by defining each row >> > of this matrix as a vector, I cluster them by using k-means >> > clustering algorithm. For now, I cluster them by using MATLAB and I >> > obtain a vector showing which row vector is in which cluster. After >> > getting this vector, to cluster row vectors according to this >> > information, all processors need to have all of the row vectors. >> > >> > According to this problem, how can I use the star forest concept? >> > >> > I will be glad if you can help me about this problem since I don't >> > have enough knowledge about graph theory. An if you have any idea >> > about how can I use k-means algorithm in a more practical way, please >> > let me know. >> > >> > Thanks! >> > >> > Eda >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtmills at anl.gov Wed Apr 29 18:07:35 2020 From: rtmills at anl.gov (Mills, Richard Tran) Date: Wed, 29 Apr 2020 23:07:35 +0000 Subject: [petsc-users] Gather and Broadcast Parallel Vectors in k-means algorithm In-Reply-To: References: <0c10fc0a-3d86-e91a-f349-fd7c087ba8ed@anl.gov> Message-ID: <738e615c-de8e-88cd-81cd-4d6f35b454d2@anl.gov> Hi Eda, Thanks for your reply. I'm still trying to understand why you say you need to duplicate the row vectors across all processes. When I have implemented parallel k-means, I don't duplicate the row vectors. (This would be very unscalable and largely defeat the point of doing this with MPI parallelism in the first place.) Earlier in this email thread, you said that you have used Matlab to get cluster IDs for each row vector. Are you trying to then use this information to calculate the cluster centroids from inside your PETSc program? If so, you can do this by having each MPI rank do the following: For cluster i in 0 to (k-1), calculate the element-wise sum of all of the local rows that belong to cluster i, then use MPI_Allreduce() to calculate the global elementwise sum of all the local sums (this array will be replicated across all MPI ranks), and finally divide by the number of members of that cluster to get the centroid. Note that MPI_Allreduce() doesn't work on PETSc objects, but simple arrays, so you'll want to use something like MatGetValues() or MatGetRow() to access the elements of your row vectors. Let me know if I am misunderstanding what you are aiming to do, or if I am misunderstanding something. It sounds like you would benefit from having some routines in PETSc to do k-means (or other) clustering, by the way? Best regards, Richard On 4/29/20 3:47 AM, Eda Oktay wrote: Dear Richard, I am trying to use spectral clustering algorithm by using k-means clustering algorithm at some point. I am doing this by producing a matrix consisting of eigenvectors (of the adjacency matrix of the graph that I want to partition), then forming row vectors of this matrix. This is the part that I am using parallel vector. By using the output from k-means, I am trying to cluster these row vectors. To cluster these vectors, I think I need all row vectors in all processes. I wanted to use sequential vectors, however, I couldn't find a different way that I form row vectors of a matrix. I am trying to use VecScatterCreateToAll, however, since my vector is parallel crated by VecDuplicateVecs, my input is not in correct type, so I get error. 
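The local-sum plus MPI_Allreduce centroid update described above can be sketched as follows. It assumes the row vectors are the rows of a dense PETSc matrix U and that cluster[] holds the k-means assignment of each locally owned row; those names, and the flat centroids[] layout of size k*dim, are invented for the example. No process ever needs a copy of all the rows.

#include <petscmat.h>

/* Each rank sums its own rows per cluster (plus a member count), then one
   MPI_Allreduce produces the global sums, replicated on every rank. */
static PetscErrorCode ComputeCentroids(Mat U, const PetscInt cluster[], PetscInt k, PetscInt dim, PetscReal centroids[])
{
  PetscErrorCode     ierr;
  PetscInt           rstart, rend, i, j, c;
  PetscReal         *localsum, *globalsum;   /* k*dim sums followed by k counts */
  const PetscScalar *vals;
  MPI_Comm           comm;

  PetscFunctionBeginUser;
  ierr = PetscObjectGetComm((PetscObject)U, &comm);CHKERRQ(ierr);
  ierr = PetscCalloc2(k*dim + k, &localsum, k*dim + k, &globalsum);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(U, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    c    = cluster[i - rstart];                               /* assignment of this local row */
    ierr = MatGetRow(U, i, NULL, NULL, &vals);CHKERRQ(ierr);  /* U assumed dense: all dim entries, in order */
    for (j = 0; j < dim; j++) localsum[c*dim + j] += PetscRealPart(vals[j]);
    localsum[k*dim + c] += 1.0;                               /* member count of cluster c */
    ierr = MatRestoreRow(U, i, NULL, NULL, &vals);CHKERRQ(ierr);
  }
  ierr = MPI_Allreduce(localsum, globalsum, (PetscMPIInt)(k*dim + k), MPIU_REAL, MPI_SUM, comm);CHKERRQ(ierr);
  for (c = 0; c < k; c++) {
    for (j = 0; j < dim; j++) centroids[c*dim + j] = globalsum[c*dim + j] / PetscMax(globalsum[k*dim + c], 1.0); /* guard empty clusters */
  }
  ierr = PetscFree2(localsum, globalsum);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}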
I still can't get how can I use this function in parallel vector created by VecDuplicateVecs. Thank you all for your help. Eda Mills, Richard Tran >, 7 Nis 2020 Sal, 01:51 tarihinde ?unu yazd?: Hi Eda, I think that you probably want to use VecScatter routines, as Junchao has suggested, instead of the lower level star forest for this. I believe that VecScatterCreateToZero() is what you want for the broadcast problem you describe, in the second part of your question. I'm not sure what you are trying to do in the first part. Taking a parallel vector and then copying its entire contents to a sequential vector residing on each process is not scalable, and a lot of the design that has gone into PETSc is to prevent the user from ever needing to do things like that. Can you please tell us what you intend to do with these sequential vectors? I'm also wondering why, later in your message, you say that you get cluster assignments from Matlab, and then "to cluster row vectors according to this information, all processors need to have all of the row vectors". Do you mean you want to get all of the row vectors copied onto all of the processors so that you can compute the cluster centroids? If so, computing the cluster centroids can be done without copying the row vectors onto all processors if you use a communication operation like MPI_Allreduce(). Lastly, let me add that I've done a fair amount of work implementing clustering algorithms on distributed memory parallel machines, but outside of PETSc. I was thinking that I should implement some of these routines using PETSc. I can't get to this immediately, but I'm wondering if you might care to tell me a bit more about the clustering problems you need to solve and how having some support for this in PETSc might (or might not) help. Best regards, Richard On 4/4/20 1:39 AM, Eda Oktay wrote: > Hi all, > > I created a parallel vector UV, by using VecDuplicateVecs since I need > row vectors of a matrix. However, I need the whole vector be in all > processors, which means I need to gather all and broadcast them to all > processors. To gather, I tried to use VecStrideGatherAll: > > Vec UVG; > VecStrideGatherAll(UV,UVG,INSERT_VALUES); > VecView(UVG,PETSC_VIEWER_STDOUT_WORLD); > > however when I try to view the vector, I get the following error. > > [3]PETSC ERROR: Invalid argument > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > [3]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [3]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > [3]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > 11:22:54 2020 > [3]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.11.1, Apr, 12, 2019 > [0]PETSC ERROR: ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > 11:22:54 2020 > [0]PETSC ERROR: Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [0]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > ./clustering_son_final_edgecut_without_parmetis on a > arch-linux2-c-debug named localhost.localdomain by edaoktay Sat Apr 4 > 11:22:54 2020 > [1]PETSC ERROR: Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [1]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > Configure options --download-mpich --download-openblas > --download-slepc --download-metis --download-parmetis --download-chaco > --with-X=1 > [3]PETSC ERROR: #1 VecStrideGatherAll() line 646 in > /home/edaoktay/petsc-3.11.1/src/vec/vec/utils/vinv.c > > I couldn't understand why I am getting this error. Is this because of > UV being created by VecDuplicateVecs? How can I solve this problem? > > The other question is broadcasting. After gathering all elements of > the vector UV, I need to broadcast them to all processors. I found > PetscSFBcastBegin. However, I couldn't understand the PetscSF concept > properly. I couldn't adjust my question to the star forest concept. > > My problem is: If I have 4 processors, I create a matrix whose columns > are 4 smallest eigenvectors, say of size 72. Then by defining each row > of this matrix as a vector, I cluster them by using k-means > clustering algorithm. For now, I cluster them by using MATLAB and I > obtain a vector showing which row vector is in which cluster. After > getting this vector, to cluster row vectors according to this > information, all processors need to have all of the row vectors. > > According to this problem, how can I use the star forest concept? > > I will be glad if you can help me about this problem since I don't > have enough knowledge about graph theory. An if you have any idea > about how can I use k-means algorithm in a more practical way, please > let me know. > > Thanks! > > Eda -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Thu Apr 30 00:22:42 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Thu, 30 Apr 2020 05:22:42 +0000 Subject: [petsc-users] Problems about tolerances set in TS In-Reply-To: References: <34442115-CFFE-4FFD-AFB5-05F24B7E546C@anl.gov> Message-ID: <2A440035-8BD5-4533-95D3-CC4DDB6DCA97@anl.gov> Please do not drop the mailing list when replying. It looks like the max steps or the final time has been set to zero for TS. You might want to check your code to see if the TS settings are correct or if they are overwritten in some callback functions by mistake. Hong (Mr.) > On Apr 29, 2020, at 10:16 PM, Yingjie Wu wrote: > > Command: mpiexec -n 1 ./SGts -snes_fd -pc_type lu -ts_error_if_step_fails 0 -ts_monitor -snes_monitor \ > -ksp_monitor \ > -ksp_converged_reason \ > -snes_converged_reason \ > -snes_rtol 1.e-2 \ > -ksp_rtol 1.e-5 \ > -snes_max_it 2 \ > -snes_view > Output: Timestep 0: > CurrentTime 0.: > 0 TS dt 1. time 0. 
> iter = 0, SNES Function norm 245.204 > 0 SNES Function norm 2.452043863308e+02 > 0 KSP Residual norm 1.641317090793e+04 > 1 KSP Residual norm 2.717662845608e-10 > Linear solve converged due to CONVERGED_RTOL iterations 1 > iter = 1, SNES Function norm 170.58 > 1 SNES Function norm 1.705802879797e+02 > 0 KSP Residual norm 1.146216600447e+04 > 1 KSP Residual norm 3.272076151518e-11 > Linear solve converged due to CONVERGED_RTOL iterations 1 > iter = 2, SNES Function norm 140.625 > 2 SNES Function norm 1.406249065994e+02 > Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 2 > SNES Object: 1 MPI processes > type: newtonls maximum iterations=2, maximum function evaluations=-1530494976 > tolerances: relative=0.01, absolute=1e-50, solution=1e-08 > total number of linear solver iterations=2 > total number of function evaluations=1063 > norm schedule ALWAYS > Jacobian is built using finite differences one column at a time > SNESLineSearch Object: 1 MPI processes > type: bt > interpolation: cubic > alpha=1.000000e-04 > maxstep=1.000000e+08, minlambda=1.000000e-12 > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 > maximum iterations=40 > KSP Object: 1 MPI processes > type: gmres > restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: lu > out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 5., needed 2.84376 > Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij rows=528, cols=528 > package used to perform factorization: petsc > total: nonzeros=7863, allocated nonzeros=7863 > total number of mallocs used during MatSetValues calls=0 > not using I-node routines > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij rows=528, cols=528 > total: nonzeros=2765, allocated nonzeros=5280 > total number of mallocs used during MatSetValues calls=0 > not using I-node routines > Timestep 1: > CurrentTime 0.: > 1 TS dt 1. time 0. > > Number of timesteps = 1 final time 0.00e+00 > > After the first time step, the program directly ends the TS solution process. I've set up 50 time steps, so the program is not running properly. > Although the program accepted that first time step SNES did not achieve convergence, and there is no error information, but did not continue to go to the next time step calculation, chose to end the program. I hope that when the SNES not converge at a certain time step and reaches the set maximum number SNES iteration steps, it can go to the next time step calculation. > I am looking forward to your suggestion. > > Thanks, > Yingjie > > Zhang, Hong ?2020?4?29??? ??11:26??? > Please send the list of your command line options and the screen output with -ts_monitor. > > Hong (Mr.) > >> On Apr 28, 2020, at 11:00 PM, Yingjie Wu wrote: >> >> Thank you very much for your reply. >> I tried both switches, but unfortunately they didn't seem to meet my needs. >> -ts_adapt_always_accept >> The switch doesn't seem to work, reporting errors >> when the maximum number of steps is reached without convergence, then the program exits. 
>> >> -ts_error_if_step_fails 0 This switch accepts the non-convergence time step and outputs the result, but does not continue into the next time step calculation ( >> >> The time step hasn't reached the maximum time step I set >> ). >> >> And I wonder if the variable behind this switch is optional? What does it mean? >> I hope to achieve in the case of non-convergence Newton step( for example, the maximum number of Newton iteration steps reached -snes_max_it 50), can go in the next time step calculation. >> >> Thanks, >> Yingjie >> >> Zhang, Hong ?2020?4?28??? ??10:51??? >> -ts_error_if_step_fails 0 >> >> You might want to find out why the nonlinear solver does not converge first. If you have a hand-written Jacobian, you can validate it with -snes_test_jacobian 1 (for a small test case). >> >> Hong (Mr.) >> >> > On Apr 28, 2020, at 8:21 AM, Yingjie Wu wrote: >> > >> > Dear PETSc developers >> > Hi, >> > >> > I have recently used TS to solve nonlinear equations with time terms. since the convergence of my model is not very good, i would like to set to iterative fixed nonlinear steps per time step. If the problem does not meet the SNES convergence criteria after fixed number of nonlinear steps , then go to the next time step calculation. I tried -snes_max_it , but didn't achieve the effect I wanted, and the program stopped after iterating the fixed number of steps. How should I set up in the program? >> > >> > Thanks, >> > Yingjie >> > From yjwu16 at gmail.com Thu Apr 30 06:22:54 2020 From: yjwu16 at gmail.com (Yingjie Wu) Date: Thu, 30 Apr 2020 19:22:54 +0800 Subject: [petsc-users] Problems about tolerances set in TS In-Reply-To: <2A440035-8BD5-4533-95D3-CC4DDB6DCA97@anl.gov> References: <34442115-CFFE-4FFD-AFB5-05F24B7E546C@anl.gov> <2A440035-8BD5-4533-95D3-CC4DDB6DCA97@anl.gov> Message-ID: I'm sorry for dropping the mailing list in previous mail. I went over the code and made sure I didn't set the maximum number or final time to zero for TS. And I found a very similar example in petsc/src/ts/tutorials/ex8.c. I made the following changes: 1. TSSetMaxStepRejections (ts,10); -> TSSetMaxStepRejections (ts,*0*); 2. TSSetMaxSNESFailures (ts,-1); -> TSSetMaxSNESFailures (ts,*0*); These changes adjust the Rejections and SNESFailures to the default state. Then test the following commands: mpiexec -n 1 ./ex8 -ts_atol 1e-2 -ts_rtol 1e-2 -ts_max_time 15 -ts_type arkimex -ts_arkimex_type 2e -problem_type orego -ts_arkimex_initial_guess_extrapolate 0 -ts_adapt_time_step_increase_delay 4 -ts_monitor -snes_monitor -snes_converged_reason The output on the screen is: (......) 
7 TS dt 6.86235 time 3.47215 0 SNES Function norm 2.000433909025e-01 1 SNES Function norm 1.991123862317e-01 2 SNES Function norm 1.975627926780e-01 3 SNES Function norm 1.946057832068e-01 4 SNES Function norm 1.900556025613e-01 5 SNES Function norm 1.840284735193e-01 6 SNES Function norm 1.767727556549e-01 7 SNES Function norm 1.685779659858e-01 8 SNES Function norm 1.597281513554e-01 9 SNES Function norm 1.504796576996e-01 10 SNES Function norm 1.410521859210e-01 11 SNES Function norm 1.316269768177e-01 12 SNES Function norm 1.223486899848e-01 13 SNES Function norm 1.127512649756e-01 14 SNES Function norm 1.020567711262e-01 15 SNES Function norm 8.979527463515e-02 16 SNES Function norm 7.508859688939e-02 17 SNES Function norm 7.355450203066e-02 18 SNES Function norm 2.927834451754e-06 19 SNES Function norm 4.107891676938e-15 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 19 0 SNES Function norm 1.349799651143e-01 1 SNES Function norm 1.333235099246e-01 2 SNES Function norm 1.314928290684e-01 3 SNES Function norm 1.295004620848e-01 4 SNES Function norm 1.273561109747e-01 5 SNES Function norm 1.250670006052e-01 6 SNES Function norm 1.226381484156e-01 7 SNES Function norm 1.200604281861e-01 8 SNES Function norm 1.173313259697e-01 9 SNES Function norm 1.144768440013e-01 10 SNES Function norm 1.141593852677e-01 11 SNES Function norm 1.121207287132e-01 12 SNES Function norm 1.087640960519e-01 13 SNES Function norm 1.044397486143e-01 14 SNES Function norm 9.944173247962e-02 15 SNES Function norm 9.401107464556e-02 16 SNES Function norm 8.834148211449e-02 17 SNES Function norm 8.258574767211e-02 18 SNES Function norm 7.686203062639e-02 19 SNES Function norm 7.076716424793e-02 20 SNES Function norm 6.389435437215e-02 21 SNES Function norm 5.587963465672e-02 22 SNES Function norm 4.597296586383e-02 23 SNES Function norm 3.649153890859e-02 24 SNES Function norm 1.112024137036e-06 25 SNES Function norm 1.368410750160e-14 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 25 8 TS dt 7.74608 time 10.3345 0 SNES Function norm 2.851716167826e-01 1 SNES Function norm 2.827877334190e-01 2 SNES Function norm 2.803985313138e-01 3 SNES Function norm 2.780035928632e-01 4 SNES Function norm 2.756024874780e-01 5 SNES Function norm 2.731947705882e-01 6 SNES Function norm 2.707799825757e-01 7 SNES Function norm 2.683576476261e-01 8 SNES Function norm 2.659272724924e-01 9 SNES Function norm 2.634883451605e-01 10 SNES Function norm 2.610403334061e-01 11 SNES Function norm 2.585826832306e-01 12 SNES Function norm 2.561148171641e-01 13 SNES Function norm 2.536361324179e-01 14 SNES Function norm 2.511459988719e-01 15 SNES Function norm 2.486437568755e-01 16 SNES Function norm 2.461287148411e-01 17 SNES Function norm 2.461031613348e-01 18 SNES Function norm 2.455705375830e-01 19 SNES Function norm 2.445878823765e-01 20 SNES Function norm 2.432027776816e-01 21 SNES Function norm 2.414549711706e-01 22 SNES Function norm 2.393776915919e-01 23 SNES Function norm 2.369987079777e-01 24 SNES Function norm 2.343411787922e-01 25 SNES Function norm 2.314243298882e-01 26 SNES Function norm 2.282639927909e-01 27 SNES Function norm 2.248730282013e-01 28 SNES Function norm 2.212616539034e-01 29 SNES Function norm 2.174376914167e-01 30 SNES Function norm 2.134067415679e-01 31 SNES Function norm 2.091722954520e-01 32 SNES Function norm 2.047263028819e-01 33 SNES Function norm 2.000468148325e-01 34 SNES Function norm 1.951751560520e-01 35 SNES Function norm 1.946187817680e-01 36 SNES Function 
norm 1.912149142781e-01 37 SNES Function norm 1.856377137631e-01 38 SNES Function norm 1.784574187852e-01 39 SNES Function norm 1.701485679136e-01 40 SNES Function norm 1.611004146461e-01 41 SNES Function norm 1.516277627394e-01 42 SNES Function norm 1.419813980321e-01 43 SNES Function norm 1.323577340278e-01 44 SNES Function norm 1.224961331340e-01 45 SNES Function norm 1.115622228182e-01 46 SNES Function norm 9.910072225366e-02 47 SNES Function norm 8.425984823824e-02 48 SNES Function norm 6.475245098809e-02 49 SNES Function norm 2.817391391875e-02 50 SNES Function norm 2.187931938195e-06 Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 50
[0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: [0]PETSC ERROR: TSStep has failed due to DIVERGED_STEP_REJECTED [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.12.4, unknown [0]PETSC ERROR: ./ex8 on a arch-linux2-c-debug named ubuntu103 by wuyj Thu Apr 30 10:40:07 2020 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack [0]PETSC ERROR: #1 TSStep() line 3596 in /home/wuyj/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #2 TSSolve() line 3768 in /home/wuyj/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #3 main() line 416 in ex8.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -problem_type orego [0]PETSC ERROR: -snes_converged_reason [0]PETSC ERROR: -snes_monitor [0]PETSC ERROR: -ts_adapt_time_step_increase_delay 4 [0]PETSC ERROR: -ts_arkimex_initial_guess_extrapolate 0 [0]PETSC ERROR: -ts_arkimex_type 2e [0]PETSC ERROR: -ts_atol 1e-2 [0]PETSC ERROR: -ts_max_time 15 [0]PETSC ERROR: -ts_monitor [0]PETSC ERROR: -ts_rtol 1e-2 [0]PETSC ERROR: -ts_type arkimex [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 91) - process 0
At the eighth time step SNES hit its maximum number of iterations, the time step failed, and the program reported the error above. Then I tested the following command, now with -ts_error_if_step_fails 0 added:
mpiexec -n 1 ./ex8 -ts_atol 1e-2 -ts_rtol 1e-2 -ts_max_time 15 -ts_type arkimex -ts_arkimex_type 2e -problem_type orego -ts_arkimex_initial_guess_extrapolate 0 -ts_adapt_time_step_increase_delay 4 -snes_monitor -snes_converged_reason -ts_monitor -ts_error_if_step_fails 0
The output on the screen is: (......)
7 TS dt 6.86235 time 3.47215 0 SNES Function norm 2.000433909025e-01 1 SNES Function norm 1.991123862317e-01 2 SNES Function norm 1.975627926780e-01 3 SNES Function norm 1.946057832068e-01 4 SNES Function norm 1.900556025613e-01 5 SNES Function norm 1.840284735193e-01 6 SNES Function norm 1.767727556549e-01 7 SNES Function norm 1.685779659858e-01 8 SNES Function norm 1.597281513554e-01 9 SNES Function norm 1.504796576996e-01 10 SNES Function norm 1.410521859210e-01 11 SNES Function norm 1.316269768177e-01 12 SNES Function norm 1.223486899848e-01 13 SNES Function norm 1.127512649756e-01 14 SNES Function norm 1.020567711262e-01 15 SNES Function norm 8.979527463515e-02 16 SNES Function norm 7.508859688939e-02 17 SNES Function norm 7.355450203066e-02 18 SNES Function norm 2.927834451754e-06 19 SNES Function norm 4.107891676938e-15 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 19 0 SNES Function norm 1.349799651143e-01 1 SNES Function norm 1.333235099246e-01 2 SNES Function norm 1.314928290684e-01 3 SNES Function norm 1.295004620848e-01 4 SNES Function norm 1.273561109747e-01 5 SNES Function norm 1.250670006052e-01 6 SNES Function norm 1.226381484156e-01 7 SNES Function norm 1.200604281861e-01 8 SNES Function norm 1.173313259697e-01 9 SNES Function norm 1.144768440013e-01 10 SNES Function norm 1.141593852677e-01 11 SNES Function norm 1.121207287132e-01 12 SNES Function norm 1.087640960519e-01 13 SNES Function norm 1.044397486143e-01 14 SNES Function norm 9.944173247962e-02 15 SNES Function norm 9.401107464556e-02 16 SNES Function norm 8.834148211449e-02 17 SNES Function norm 8.258574767211e-02 18 SNES Function norm 7.686203062639e-02 19 SNES Function norm 7.076716424793e-02 20 SNES Function norm 6.389435437215e-02 21 SNES Function norm 5.587963465672e-02 22 SNES Function norm 4.597296586383e-02 23 SNES Function norm 3.649153890859e-02 24 SNES Function norm 1.112024137036e-06 25 SNES Function norm 1.368410750160e-14 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 25 8 TS dt 7.74608 time 10.3345 0 SNES Function norm 2.851716167826e-01 1 SNES Function norm 2.827877334190e-01 2 SNES Function norm 2.803985313138e-01 3 SNES Function norm 2.780035928632e-01 4 SNES Function norm 2.756024874780e-01 5 SNES Function norm 2.731947705882e-01 6 SNES Function norm 2.707799825757e-01 7 SNES Function norm 2.683576476261e-01 8 SNES Function norm 2.659272724924e-01 9 SNES Function norm 2.634883451605e-01 10 SNES Function norm 2.610403334061e-01 11 SNES Function norm 2.585826832306e-01 12 SNES Function norm 2.561148171641e-01 13 SNES Function norm 2.536361324179e-01 14 SNES Function norm 2.511459988719e-01 15 SNES Function norm 2.486437568755e-01 16 SNES Function norm 2.461287148411e-01 17 SNES Function norm 2.461031613348e-01 18 SNES Function norm 2.455705375830e-01 19 SNES Function norm 2.445878823765e-01 20 SNES Function norm 2.432027776816e-01 21 SNES Function norm 2.414549711706e-01 22 SNES Function norm 2.393776915919e-01 23 SNES Function norm 2.369987079777e-01 24 SNES Function norm 2.343411787922e-01 25 SNES Function norm 2.314243298882e-01 26 SNES Function norm 2.282639927909e-01 27 SNES Function norm 2.248730282013e-01 28 SNES Function norm 2.212616539034e-01 29 SNES Function norm 2.174376914167e-01 30 SNES Function norm 2.134067415679e-01 31 SNES Function norm 2.091722954520e-01 32 SNES Function norm 2.047263028819e-01 33 SNES Function norm 2.000468148325e-01 34 SNES Function norm 1.951751560520e-01 35 SNES Function norm 1.946187817680e-01 36 SNES Function 
norm 1.912149142781e-01 37 SNES Function norm 1.856377137631e-01 38 SNES Function norm 1.784574187852e-01 39 SNES Function norm 1.701485679136e-01 40 SNES Function norm 1.611004146461e-01 41 SNES Function norm 1.516277627394e-01 42 SNES Function norm 1.419813980321e-01 43 SNES Function norm 1.323577340278e-01 44 SNES Function norm 1.224961331340e-01 45 SNES Function norm 1.115622228182e-01 46 SNES Function norm 9.910072225366e-02 47 SNES Function norm 8.425984823824e-02 48 SNES Function norm 6.475245098809e-02 49 SNES Function norm 2.817391391875e-02 50 SNES Function norm 2.187931938195e-06 Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 50 9 TS dt 1.93652 time 10.3345 steps 9 (1 rejected, 1 SNES fails), ftime 10.3345, nonlinits 126, linits 126
This time the program did not report an error, but it still ended the run after the eighth time step and wrote out the results. Note that it did not reach the set termination time (-ts_max_time 15). So -ts_error_if_step_fails 0 avoids the error message, but TS still stops after the failed SNES solve instead of proceeding to the next time step. What I would like is to continue with the next time step using the result of the failed step (even though that result is not converged). In addition, could you explain what TSSetMaxStepRejections() and TSSetMaxSNESFailures() do? I looked them up in manualpages/singleindex.html, but the explanations are very brief, and I still do not quite understand what a step rejection or an SNES failure means (my current reading of the manual pages is sketched at the end of this mail). Thank you very much for your reply, and I hope you can help me answer these questions.
Thanks,
Yingjie
On Apr 30, 2020, at 1:22, Zhang, Hong wrote:
> Please do not drop the mailing list when replying.
>
> It looks like the max steps or the final time has been set to zero for TS. You might want to check your code to see if the TS settings are correct or if they are overwritten in some callback functions by mistake.
>
> Hong (Mr.)
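For reference, here is my current reading of the related settings, pieced together from the manual pages. This is only a sketch that I have not tested, the values are just examples, and the interpretation of -1 as "unlimited" is my assumption from the documentation (ts is the already created TS object):

  TSAdapt adapt;
  ierr = TSSetErrorIfStepFails(ts, PETSC_FALSE);CHKERRQ(ierr);    /* same as -ts_error_if_step_fails 0: do not raise an error when a step fails */
  ierr = TSSetMaxStepRejections(ts, 10);CHKERRQ(ierr);            /* a step may be rejected and retried with a smaller dt this many times */
  ierr = TSSetMaxSNESFailures(ts, -1);CHKERRQ(ierr);              /* number of failed SNES solves TS tolerates; -1 seems to mean unlimited */
  ierr = TSGetAdapt(ts, &adapt);CHKERRQ(ierr);
  ierr = TSAdaptSetAlwaysAccept(adapt, PETSC_TRUE);CHKERRQ(ierr); /* same as -ts_adapt_always_accept: keep a step even if the adaptor would reject it */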
From awalker at student.ethz.ch Thu Apr 30 09:41:54 2020
From: awalker at student.ethz.ch (Walker Andreas)
Date: Thu, 30 Apr 2020 14:41:54 +0000
Subject: [petsc-users] Performance of SLEPc's Krylov-Schur solver
Message-ID: <86B05A0E-87C4-4B23-AC8B-6C39E6538B84@student.ethz.ch>
Hello everyone,
I have used SLEPc successfully on a FEM-related project. Even though it is very powerful overall, the speedup I measure is a bit below my expectations. Compared to using a single core, the speedup is for example around 1.8 for two cores but only maybe 50-60 for 128 cores and maybe 70 or 80 for 256 cores. Some details about my problem:
- The problem is based on meshes with up to 400k degrees of freedom. DMPlex is used for organizing it.
- ParMetis is used to partition the mesh. This yields a stiffness matrix where the vast majority of entries is in the diagonal blocks (i.e. looking at the rows owned by a core, there is a very dense square-shaped region around the diagonal and some loosely scattered nonzeroes in the other columns).
- The actual matrix from which I need eigenvalues is a 2x2 block matrix, saved as a MATNEST matrix. Each of these four matrices is computed based on the stiffness matrix and has a similar size and nonzero pattern. For a mesh of 200k dofs, one such matrix has a size of about 174kx174k and on average about 40 nonzeroes per row.
- I use the default Krylov-Schur solver and look for the 100 smallest eigenvalues (the setup is sketched below).
- The output of -log_view for the 200k-dof mesh described above run on 128 cores is at the end of this mail.
I noticed that the problem matrices are not perfectly balanced, i.e. the number of rows per core might vary between 2500 and 3000, for example.
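For completeness, the eigensolver setup is roughly the following. This is a simplified sketch from memory, not my exact code: A stands for the assembled MATNEST matrix, the problem is treated as a standard one here, and whether "smallest" should be EPS_SMALLEST_REAL or EPS_SMALLEST_MAGNITUDE in my actual runs I would have to double-check.

  EPS eps;
  ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);                            /* A is the 2x2 MATNEST matrix described above */
  ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr);            /* looking for the smallest eigenvalues */
  ierr = EPSSetDimensions(eps, 100, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr); /* nev = 100 */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);                                   /* solver type left at the default, Krylov-Schur */
  ierr = EPSSolve(eps);CHKERRQ(ierr);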
But I am not sure if this is the main reason for the poor speedup. I tried to reduce the subspace size but without effect. I also attempted to use the shift-and-invert spectral transformation but the MATNEST-type prevents this. Are there any suggestions to improve the speedup further or is this the maximum speedup that I can expect? Thanks a lot in advance, Andreas Walker m&m group D-MAVT ETH Zurich ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./Solver on a named eu-g1-050-2 with 128 processors, by awalker Thu Apr 30 15:50:22 2020 Using Petsc Release Version 3.10.5, Mar, 28, 2019 Max Max/Min Avg Total Time (sec): 6.209e+02 1.000 6.209e+02 Objects: 6.068e+05 1.001 6.063e+05 Flop: 9.230e+11 1.816 7.212e+11 9.231e+13 Flop/sec: 1.487e+09 1.816 1.161e+09 1.487e+11 MPI Messages: 1.451e+07 2.999 8.265e+06 1.058e+09 MPI Message Lengths: 6.062e+09 2.011 5.029e+02 5.321e+11 MPI Reductions: 1.512e+06 1.000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flop and VecAXPY() for complex vectors of length N --> 8N flop Summary of Stages: ----- Time ------ ----- Flop ------ --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total Count %Total Avg %Total Count %Total 0: Main Stage: 6.2090e+02 100.0% 9.2309e+13 100.0% 1.058e+09 100.0% 5.029e+02 100.0% 1.512e+06 100.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flop: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent AvgLen: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %F - percent flop in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flop --- Global --- --- Stage ---- Total Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage BuildTwoSided 20 1.0 2.3249e-01 2.2 0.00e+00 0.0 2.2e+04 4.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 BuildTwoSidedF 317 1.0 8.5016e-01 4.8 0.00e+00 0.0 2.1e+04 1.4e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMult 150986 1.0 2.1963e+02 1.3 8.07e+10 1.8 1.1e+09 5.0e+02 1.2e+06 31 9100100 80 31 9100100 80 37007 MatMultAdd 603944 1.0 1.6209e+02 1.4 8.07e+10 1.8 1.1e+09 5.0e+02 0.0e+00 23 9100100 0 23 9100100 0 50145 MatConvert 30 1.0 1.6488e-02 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatScale 10 1.0 1.0347e-03 3.9 6.68e+05 1.8 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 65036 MatAssemblyBegin 916 1.0 8.6715e-01 1.4 0.00e+00 0.0 2.1e+04 1.4e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 916 1.0 2.0682e-01 1.1 0.00e+00 0.0 4.7e+05 1.3e+02 1.5e+03 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 42 1.0 7.2787e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 10 1.0 1.4816e+00 1.0 0.00e+00 0.0 6.4e+03 1.3e+05 3.0e+01 0 0 0 0 0 0 0 0 0 0 0 MatAXPY 40 1.0 1.0752e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatTranspose 80 1.0 3.0198e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMult 60 1.0 3.0391e-01 1.0 7.82e+06 1.6 3.8e+05 2.8e+02 7.8e+02 0 0 0 0 0 0 0 0 0 0 2711 MatMatMultSym 60 1.0 2.4238e-01 1.0 0.00e+00 0.0 3.3e+05 2.4e+02 7.2e+02 0 0 0 0 0 0 0 0 0 0 0 MatMatMultNum 60 1.0 5.8508e-02 1.0 7.82e+06 1.6 4.7e+04 5.7e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 14084 MatPtAP 40 1.0 4.5617e-01 1.0 1.59e+07 1.6 3.3e+05 1.0e+03 6.4e+02 0 0 0 0 0 0 0 0 0 0 3649 MatPtAPSymbolic 40 1.0 2.6002e-01 1.0 0.00e+00 0.0 1.7e+05 6.5e+02 2.8e+02 0 0 0 0 0 0 0 0 0 0 0 MatPtAPNumeric 40 1.0 1.9293e-01 1.0 1.59e+07 1.6 1.5e+05 1.5e+03 3.2e+02 0 0 0 0 0 0 0 0 0 0 8629 MatTrnMatMult 40 1.0 2.3801e-01 1.0 6.09e+06 1.8 1.8e+05 1.0e+03 6.4e+02 0 0 0 0 0 0 0 0 0 0 2442 MatTrnMatMultSym 40 1.0 1.6962e-01 1.0 0.00e+00 0.0 1.7e+05 4.4e+02 6.4e+02 0 0 0 0 0 0 0 0 0 0 0 MatTrnMatMultNum 40 1.0 6.9000e-02 1.0 6.09e+06 1.8 9.7e+03 1.1e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 8425 MatGetLocalMat 240 1.0 4.9149e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetBrAoCol 160 1.0 2.0470e-02 1.6 0.00e+00 0.0 3.3e+05 4.1e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatTranspose_SeqAIJ_FAST 80 1.0 2.9940e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 Mesh Partition 1 1.0 1.4825e+00 1.0 0.00e+00 0.0 9.8e+04 6.9e+01 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 Mesh Migration 1 1.0 3.6680e-02 1.0 0.00e+00 0.0 1.5e+03 1.4e+04 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 DMPlexDistribute 1 1.0 1.5269e+00 1.0 0.00e+00 0.0 1.0e+05 3.5e+02 1.2e+01 0 0 0 0 0 0 0 0 0 0 0 DMPlexDistCones 1 1.0 1.8845e-02 1.2 0.00e+00 0.0 1.0e+03 1.7e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 DMPlexDistLabels 1 1.0 9.7280e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0 DMPlexDistData 1 1.0 3.1499e-01 1.4 0.00e+00 0.0 9.8e+04 
4.3e+01 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 DMPlexStratify 2 1.0 9.3421e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 DMPlexPrealloc 2 1.0 3.5980e-02 1.0 0.00e+00 0.0 4.0e+04 1.8e+03 3.0e+01 0 0 0 0 0 0 0 0 0 0 0 SFSetGraph 20 1.0 1.6069e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFSetUp 20 1.0 2.8043e-01 1.9 0.00e+00 0.0 6.7e+04 5.0e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFBcastBegin 25 1.0 3.9653e-02 2.5 0.00e+00 0.0 6.1e+04 4.9e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFBcastEnd 25 1.0 9.0128e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFReduceBegin 10 1.0 4.3473e-04 5.5 0.00e+00 0.0 7.4e+03 4.0e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFReduceEnd 10 1.0 5.7962e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFFetchOpBegin 2 1.0 1.6069e-0434.7 0.00e+00 0.0 1.8e+03 4.4e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFFetchOpEnd 2 1.0 8.9251e-04 2.6 0.00e+00 0.0 1.8e+03 4.4e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 302179 1.0 1.3128e+00 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyBegin 1 1.0 1.3844e-03 7.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 1 1.0 3.4710e-05 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 603945 1.0 2.2874e+01 4.4 0.00e+00 0.0 1.1e+09 5.0e+02 1.0e+00 2 0100100 0 2 0100100 0 0 VecScatterEnd 603944 1.0 8.2651e+01 4.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 VecSetRandom 11 1.0 2.7061e-03 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 EPSSetUp 10 1.0 5.0371e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+01 0 0 0 0 0 0 0 0 0 0 0 EPSSolve 10 1.0 6.1329e+02 1.0 9.23e+11 1.8 1.1e+09 5.0e+02 1.5e+06 99100100100100 99100100100100 150509 STSetUp 10 1.0 2.5475e-04 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 STApply 150986 1.0 2.1997e+02 1.3 8.07e+10 1.8 1.1e+09 5.0e+02 1.2e+06 31 9100100 80 31 9100100 80 36950 BVCopy 1791 1.0 5.1953e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 BVMultVec 301925 1.0 1.5007e+02 3.1 3.31e+11 1.8 0.0e+00 0.0e+00 0.0e+00 14 36 0 0 0 14 36 0 0 0 220292 BVMultInPlace 1801 1.0 8.0080e+00 1.8 1.78e+11 1.8 0.0e+00 0.0e+00 0.0e+00 1 19 0 0 0 1 19 0 0 0 2222543 BVDotVec 301925 1.0 3.2807e+02 1.4 3.33e+11 1.8 0.0e+00 0.0e+00 3.0e+05 47 36 0 0 20 47 36 0 0 20 101409 BVOrthogonalizeV 150996 1.0 4.0292e+02 1.1 6.64e+11 1.8 0.0e+00 0.0e+00 3.0e+05 62 72 0 0 20 62 72 0 0 20 164619 BVScale 150996 1.0 4.1660e-01 3.2 5.27e+08 1.8 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 126494 BVSetRandom 10 1.0 2.5061e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 DSSolve 1801 1.0 2.0764e+01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 DSVectors 2779 1.0 1.2691e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 DSOther 1801 1.0 1.2944e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 1 1 584 0. Distributed Mesh 6 6 29160 0. GraphPartitioner 2 2 1244 0. Matrix 1104 1104 136615232 0. Index Set 930 930 9125912 0. IS L to G Mapping 3 3 2235608 0. Section 28 26 18720 0. Star Forest Graph 30 30 25632 0. Discrete System 6 6 5616 0. PetscRandom 11 11 7194 0. Vector 604372 604372 8204816368 0. Vec Scatter 203 203 272192 0. 
Viewer 21 10 8480 0. EPS Solver 10 10 86360 0. Spectral Transform 10 10 8400 0. Basis Vectors 10 10 530848 0. Region 10 10 6800 0. Direct Solver 10 10 9838880 0. Krylov Solver 10 10 13920 0. Preconditioner 10 10 10080 0. ======================================================================================================================== Average time to get PetscTime(): 3.49944e-08 Average time for MPI_Barrier(): 5.842e-06 Average time for zero size MPI_Send(): 8.72551e-06 #PETSc Option Table entries: -config=benchmark3.json -log_view #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure options: --prefix=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS="-ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2" FFLAGS= CXXFLAGS="-ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2" --with-cc=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc --with-cxx=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpic++ --with-fc=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 --with-precision=double --with-scalar-type=real --with-shared-libraries=1 --with-debugging=0 --with-64-bit-indices=0 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib/libopenblas.so --with-x=0 --with-cxx-dialect=C++11 --with-boost=1 --with-clanguage=C --with-scalapack-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib/libscalapack.so --with-scalapack=1 --with-metis=1 --with-metis-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk --with-hdf5=1 --with-hdf5-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5 --with-hypre=1 --with-hypre-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne --with-parmetis=1 --with-parmetis-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4 --with-mumps=1 --with-mumps-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b --with-trilinos=1 --with-trilinos-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo --with-fftw=0 --with-cxx-dialect=C++11 --with-superlu_dist-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/include --with-superlu_dist-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib/libsuperlu_dist.a --with-superlu_dist=1 --with-suitesparse-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/include --with-suitesparse-lib="/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libumfpack.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libklu.so 
/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcholmod.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libbtf.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libccolamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcolamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libsuitesparseconfig.so /lib64/librt.so" --with-suitesparse=1 --with-zlib-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/include --with-zlib-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib/libz.so --with-zlib=1 ----------------------------------------- Libraries compiled on 2020-01-22 15:21:53 on eu-c7-051-02 Machine characteristics: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core Using PETSc directory: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit Using PETSc arch: ----------------------------------------- Using C compiler: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc -ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2 Using Fortran compiler: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 ----------------------------------------- Using include paths: -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/include ----------------------------------------- Using C linker: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc Using Fortran linker: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 Using libraries: -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/lib -lpetsc 
-Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib /lib64/librt.so -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hwloc-1.11.9-a436y6rdahnn57u6oe6snwemjhcfmrso/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hwloc-1.11.9-a436y6rdahnn57u6oe6snwemjhcfmrso/lib -Wl,-rpath,/cluster/apps/lsf/10.1/linux2.6-glibc2.3-x86_64/lib -L/cluster/apps/lsf/10.1/linux2.6-glibc2.3-x86_64/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib:/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 
-Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib -lmuelu-adapters -lmuelu-interface -lmuelu -lstratimikos -lstratimikosbelos -lstratimikosaztecoo -lstratimikosamesos -lstratimikosml -lstratimikosifpack -lModeLaplace -lanasaziepetra -lanasazi -lmapvarlib -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus -lmapvarlib -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus -lbelosxpetra -lbelosepetra -lbelos -lml -lifpack -lpamgen_extras -lpamgen -lamesos -lgaleri-xpetra -lgaleri-epetra -laztecoo -lisorropia -lxpetra-sup -lxpetra -lthyraepetraext -lthyraepetra -lthyracore -lthyraepetraext -lthyraepetra -lthyracore -lepetraext -ltrilinosss -ltriutils -lzoltan -lepetra -lsacado -lrtop -lkokkoskernels -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchosparser -lteuchoscore -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchosparser -lteuchoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -lgtest -lpthread -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu_dist -lHYPRE -lopenblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lm -lz -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lstdc++ -ldl ----------------------------------------- From knepley at gmail.com Thu Apr 30 10:14:34 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Apr 2020 11:14:34 -0400 Subject: [petsc-users] Performance of SLEPc's Krylov-Schur solver In-Reply-To: <86B05A0E-87C4-4B23-AC8B-6C39E6538B84@student.ethz.ch> References: <86B05A0E-87C4-4B23-AC8B-6C39E6538B84@student.ethz.ch> Message-ID: On Thu, Apr 30, 2020 at 10:55 AM Walker Andreas wrote: > Hello everyone, > > I have used SLEPc successfully on a FEM-related project. Even though it is > very powerful overall, the speedup I measure is a bit below my > expectations. Compared to using a single core, the speedup is for example > around 1.8 for two cores but only maybe 50-60 for 128 cores and maybe 70 or > 80 for 256 cores. Some details about my problem: > > - The problem is based on meshes with up to 400k degrees of freedom. > DMPlex is used for organizing it. > - ParMetis is used to partition the mesh. This yields a stiffness matrix > where the vast majority of entries is in the diagonal blocks (i.e. looking > at the rows owned by a core, there is a very dense square-shaped region > around the diagonal and some loosely scattered nozeroes in the other > columns). > - The actual matrix from which I need eigenvalues is a 2x2 block matrix, > saved as MATNEST - matrix. 
Each of these four matrices is computed based on > the stiffness matrix and has a similar size and nonzero pattern. For a mesh > of 200k dofs, one such matrix has a size of about 174kx174k and on average > about 40 nonzeroes per row. > - I use the default Krylov-Schur solver and look for the 100 smallest > eigenvalues > - The output of -log_view for the 200k-dof - mesh described above run on > 128 cores is at the end of this mail. > > I noticed that the problem matrices are not perfectly balanced, i.e. the > number of rows per core might vary between 2500 and 3000, for example. But > I am not sure if this is the main reason for the poor speedup. > > I tried to reduce the subspace size but without effect. I also attempted > to use the shift-and-invert spectral transformation but the MATNEST-type > prevents this. > > Are there any suggestions to improve the speedup further or is this the > maximum speedup that I can expect? >
Can you also give us the performance for this problem on one node using the same number of cores per node? Then we can calculate speedup and look at which functions are not speeding up.
Thanks,
Matt
> Thanks a lot in advance, > > Andreas Walker > > m&m group > D-MAVT > ETH Zurich
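If it helps with the comparison, one thing you can do is put the solve in its own logging stage, so that the -log_view tables from the single-node and multi-node runs can be compared event by event within that stage. A rough sketch (the stage name is arbitrary and eps is your EPS object):

  PetscLogStage stage;
  ierr = PetscLogStageRegister("Eigensolve", &stage);CHKERRQ(ierr);
  ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
  ierr = EPSSolve(eps);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);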
-Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib:/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 > -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 > -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 > -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 > -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 > -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib > -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib > -lmuelu-adapters -lmuelu-interface -lmuelu -lstratimikos -lstratimikosbelos > -lstratimikosaztecoo -lstratimikosamesos -lstratimikosml > -lstratimikosifpack -lModeLaplace -lanasaziepetra -lanasazi -lmapvarlib > -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco > -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac > -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus > -lmapvarlib -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco > -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac > -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus > -lbelosxpetra -lbelosepetra -lbelos -lml -lifpack -lpamgen_extras -lpamgen > -lamesos -lgaleri-xpetra -lgaleri-epetra -laztecoo -lisorropia -lxpetra-sup > -lxpetra -lthyraepetraext -lthyraepetra -lthyracore -lthyraepetraext > -lthyraepetra -lthyracore -lepetraext -ltrilinosss -ltriutils -lzoltan > -lepetra -lsacado -lrtop -lkokkoskernels -lteuchoskokkoscomm > -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm > -lteuchosparameterlist -lteuchosparser -lteuchoscore -lteuchoskokkoscomm > -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm > -lteuchosparameterlist -lteuchosparser -lteuchoscore -lkokkosalgorithms > -lkokkoscontainers -lkokkoscore -lkokkosalgorithms -lkokkoscontainers > -lkokkoscore -lgtest -lpthread -lcmumps -ldmumps -lsmumps -lzmumps > -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd > -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu_dist -lHYPRE -lopenblas > -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lm -lz > -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi > -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lstdc++ -ldl > ----------------------------------------- > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Apr 30 10:23:14 2020 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 30 Apr 2020 17:23:14 +0200 Subject: [petsc-users] Performance of SLEPc's Krylov-Schur solver In-Reply-To: References: <86B05A0E-87C4-4B23-AC8B-6C39E6538B84@student.ethz.ch> Message-ID: <09DBCFDA-4DBA-41DE-9891-6E4921DCFC3C@dsic.upv.es> Here are some questions and comments: - Is the speedup relative to EPSSolve or to the overall program? 
Suggest using times for EPSSolve (although the difference should be small). - Are you using a multithreaded BLAS? - How many iterations is the solver doing? You can check this easily by adding the command-line option -eps_converged_reason If the number of iterations differ a lot between different runs with different number of processes, it is most probably due to random initial vectors. You can avoid this by setting a fixed initial vector with EPSSetInitialSpace() or running with the command-line option -bv_reproducible_random Jose > El 30 abr 2020, a las 17:14, Matthew Knepley escribi?: > > On Thu, Apr 30, 2020 at 10:55 AM Walker Andreas wrote: > Hello everyone, > > I have used SLEPc successfully on a FEM-related project. Even though it is very powerful overall, the speedup I measure is a bit below my expectations. Compared to using a single core, the speedup is for example around 1.8 for two cores but only maybe 50-60 for 128 cores and maybe 70 or 80 for 256 cores. Some details about my problem: > > - The problem is based on meshes with up to 400k degrees of freedom. DMPlex is used for organizing it. > - ParMetis is used to partition the mesh. This yields a stiffness matrix where the vast majority of entries is in the diagonal blocks (i.e. looking at the rows owned by a core, there is a very dense square-shaped region around the diagonal and some loosely scattered nozeroes in the other columns). > - The actual matrix from which I need eigenvalues is a 2x2 block matrix, saved as MATNEST - matrix. Each of these four matrices is computed based on the stiffness matrix and has a similar size and nonzero pattern. For a mesh of 200k dofs, one such matrix has a size of about 174kx174k and on average about 40 nonzeroes per row. > - I use the default Krylov-Schur solver and look for the 100 smallest eigenvalues > - The output of -log_view for the 200k-dof - mesh described above run on 128 cores is at the end of this mail. > > I noticed that the problem matrices are not perfectly balanced, i.e. the number of rows per core might vary between 2500 and 3000, for example. But I am not sure if this is the main reason for the poor speedup. > > I tried to reduce the subspace size but without effect. I also attempted to use the shift-and-invert spectral transformation but the MATNEST-type prevents this. > > Are there any suggestions to improve the speedup further or is this the maximum speedup that I can expect? > > Can you also give us the performance for this problem on one node using the same number of cores per node? Then we can calculate speedup > and look at which functions are not speeding up. > > Thanks, > > Matt > > Thanks a lot in advance, > > Andreas Walker > > m&m group > D-MAVT > ETH Zurich > > ************************************************************************************************************************ > *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** > ************************************************************************************************************************ > > ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- > > ./Solver on a named eu-g1-050-2 with 128 processors, by awalker Thu Apr 30 15:50:22 2020 > Using Petsc Release Version 3.10.5, Mar, 28, 2019 > > Max Max/Min Avg Total > Time (sec): 6.209e+02 1.000 6.209e+02 > Objects: 6.068e+05 1.001 6.063e+05 > Flop: 9.230e+11 1.816 7.212e+11 9.231e+13 > Flop/sec: 1.487e+09 1.816 1.161e+09 1.487e+11 > MPI Messages: 1.451e+07 2.999 8.265e+06 1.058e+09 > MPI Message Lengths: 6.062e+09 2.011 5.029e+02 5.321e+11 > MPI Reductions: 1.512e+06 1.000 > > Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of length N --> 2N flop > and VecAXPY() for complex vectors of length N --> 8N flop > > Summary of Stages: ----- Time ------ ----- Flop ------ --- Messages --- -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total Count %Total Avg %Total Count %Total > 0: Main Stage: 6.2090e+02 100.0% 9.2309e+13 100.0% 1.058e+09 100.0% 5.029e+02 100.0% 1.512e+06 100.0% > > ------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flop: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all processors > Mess: number of messages sent > AvgLen: average message length (bytes) > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
> %T - percent time in this phase %F - percent flop in this phase > %M - percent messages in this phase %L - percent message lengths in this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors) > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flop --- Global --- --- Stage ---- Total > Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > BuildTwoSided 20 1.0 2.3249e-01 2.2 0.00e+00 0.0 2.2e+04 4.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > BuildTwoSidedF 317 1.0 8.5016e-01 4.8 0.00e+00 0.0 2.1e+04 1.4e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMult 150986 1.0 2.1963e+02 1.3 8.07e+10 1.8 1.1e+09 5.0e+02 1.2e+06 31 9100100 80 31 9100100 80 37007 > MatMultAdd 603944 1.0 1.6209e+02 1.4 8.07e+10 1.8 1.1e+09 5.0e+02 0.0e+00 23 9100100 0 23 9100100 0 50145 > MatConvert 30 1.0 1.6488e-02 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatScale 10 1.0 1.0347e-03 3.9 6.68e+05 1.8 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 65036 > MatAssemblyBegin 916 1.0 8.6715e-01 1.4 0.00e+00 0.0 2.1e+04 1.4e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatAssemblyEnd 916 1.0 2.0682e-01 1.1 0.00e+00 0.0 4.7e+05 1.3e+02 1.5e+03 0 0 0 0 0 0 0 0 0 0 0 > MatZeroEntries 42 1.0 7.2787e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatView 10 1.0 1.4816e+00 1.0 0.00e+00 0.0 6.4e+03 1.3e+05 3.0e+01 0 0 0 0 0 0 0 0 0 0 0 > MatAXPY 40 1.0 1.0752e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatTranspose 80 1.0 3.0198e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMatMult 60 1.0 3.0391e-01 1.0 7.82e+06 1.6 3.8e+05 2.8e+02 7.8e+02 0 0 0 0 0 0 0 0 0 0 2711 > MatMatMultSym 60 1.0 2.4238e-01 1.0 0.00e+00 0.0 3.3e+05 2.4e+02 7.2e+02 0 0 0 0 0 0 0 0 0 0 0 > MatMatMultNum 60 1.0 5.8508e-02 1.0 7.82e+06 1.6 4.7e+04 5.7e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 14084 > MatPtAP 40 1.0 4.5617e-01 1.0 1.59e+07 1.6 3.3e+05 1.0e+03 6.4e+02 0 0 0 0 0 0 0 0 0 0 3649 > MatPtAPSymbolic 40 1.0 2.6002e-01 1.0 0.00e+00 0.0 1.7e+05 6.5e+02 2.8e+02 0 0 0 0 0 0 0 0 0 0 0 > MatPtAPNumeric 40 1.0 1.9293e-01 1.0 1.59e+07 1.6 1.5e+05 1.5e+03 3.2e+02 0 0 0 0 0 0 0 0 0 0 8629 > MatTrnMatMult 40 1.0 2.3801e-01 1.0 6.09e+06 1.8 1.8e+05 1.0e+03 6.4e+02 0 0 0 0 0 0 0 0 0 0 2442 > MatTrnMatMultSym 40 1.0 1.6962e-01 1.0 0.00e+00 0.0 1.7e+05 4.4e+02 6.4e+02 0 0 0 0 0 0 0 0 0 0 0 > MatTrnMatMultNum 40 1.0 6.9000e-02 1.0 6.09e+06 1.8 9.7e+03 1.1e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 8425 > MatGetLocalMat 240 1.0 4.9149e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetBrAoCol 160 1.0 2.0470e-02 1.6 0.00e+00 0.0 3.3e+05 4.1e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatTranspose_SeqAIJ_FAST 80 1.0 2.9940e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > Mesh Partition 1 1.0 1.4825e+00 1.0 0.00e+00 0.0 9.8e+04 6.9e+01 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 > Mesh Migration 1 1.0 3.6680e-02 1.0 0.00e+00 0.0 1.5e+03 1.4e+04 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 > DMPlexDistribute 1 1.0 1.5269e+00 1.0 0.00e+00 0.0 1.0e+05 3.5e+02 1.2e+01 0 0 0 0 0 0 0 0 0 0 0 > DMPlexDistCones 1 1.0 1.8845e-02 1.2 0.00e+00 0.0 1.0e+03 1.7e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > DMPlexDistLabels 1 1.0 9.7280e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 
3.0e+00 0 0 0 0 0 0 0 0 0 0 0 > DMPlexDistData 1 1.0 3.1499e-01 1.4 0.00e+00 0.0 9.8e+04 4.3e+01 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > DMPlexStratify 2 1.0 9.3421e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 > DMPlexPrealloc 2 1.0 3.5980e-02 1.0 0.00e+00 0.0 4.0e+04 1.8e+03 3.0e+01 0 0 0 0 0 0 0 0 0 0 0 > SFSetGraph 20 1.0 1.6069e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SFSetUp 20 1.0 2.8043e-01 1.9 0.00e+00 0.0 6.7e+04 5.0e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SFBcastBegin 25 1.0 3.9653e-02 2.5 0.00e+00 0.0 6.1e+04 4.9e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SFBcastEnd 25 1.0 9.0128e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SFReduceBegin 10 1.0 4.3473e-04 5.5 0.00e+00 0.0 7.4e+03 4.0e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SFReduceEnd 10 1.0 5.7962e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SFFetchOpBegin 2 1.0 1.6069e-0434.7 0.00e+00 0.0 1.8e+03 4.4e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SFFetchOpEnd 2 1.0 8.9251e-04 2.6 0.00e+00 0.0 1.8e+03 4.4e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 302179 1.0 1.3128e+00 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAssemblyBegin 1 1.0 1.3844e-03 7.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAssemblyEnd 1 1.0 3.4710e-05 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 603945 1.0 2.2874e+01 4.4 0.00e+00 0.0 1.1e+09 5.0e+02 1.0e+00 2 0100100 0 2 0100100 0 0 > VecScatterEnd 603944 1.0 8.2651e+01 4.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 > VecSetRandom 11 1.0 2.7061e-03 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > EPSSetUp 10 1.0 5.0371e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+01 0 0 0 0 0 0 0 0 0 0 0 > EPSSolve 10 1.0 6.1329e+02 1.0 9.23e+11 1.8 1.1e+09 5.0e+02 1.5e+06 99100100100100 99100100100100 150509 > STSetUp 10 1.0 2.5475e-04 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > STApply 150986 1.0 2.1997e+02 1.3 8.07e+10 1.8 1.1e+09 5.0e+02 1.2e+06 31 9100100 80 31 9100100 80 36950 > BVCopy 1791 1.0 5.1953e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > BVMultVec 301925 1.0 1.5007e+02 3.1 3.31e+11 1.8 0.0e+00 0.0e+00 0.0e+00 14 36 0 0 0 14 36 0 0 0 220292 > BVMultInPlace 1801 1.0 8.0080e+00 1.8 1.78e+11 1.8 0.0e+00 0.0e+00 0.0e+00 1 19 0 0 0 1 19 0 0 0 2222543 > BVDotVec 301925 1.0 3.2807e+02 1.4 3.33e+11 1.8 0.0e+00 0.0e+00 3.0e+05 47 36 0 0 20 47 36 0 0 20 101409 > BVOrthogonalizeV 150996 1.0 4.0292e+02 1.1 6.64e+11 1.8 0.0e+00 0.0e+00 3.0e+05 62 72 0 0 20 62 72 0 0 20 164619 > BVScale 150996 1.0 4.1660e-01 3.2 5.27e+08 1.8 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 126494 > BVSetRandom 10 1.0 2.5061e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > DSSolve 1801 1.0 2.0764e+01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 > DSVectors 2779 1.0 1.2691e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > DSOther 1801 1.0 1.2944e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' Mem. > Reports information only for process 0. > > --- Event Stage 0: Main Stage > > Container 1 1 584 0. > Distributed Mesh 6 6 29160 0. > GraphPartitioner 2 2 1244 0. > Matrix 1104 1104 136615232 0. > Index Set 930 930 9125912 0. 
> IS L to G Mapping 3 3 2235608 0. > Section 28 26 18720 0. > Star Forest Graph 30 30 25632 0. > Discrete System 6 6 5616 0. > PetscRandom 11 11 7194 0. > Vector 604372 604372 8204816368 0. > Vec Scatter 203 203 272192 0. > Viewer 21 10 8480 0. > EPS Solver 10 10 86360 0. > Spectral Transform 10 10 8400 0. > Basis Vectors 10 10 530848 0. > Region 10 10 6800 0. > Direct Solver 10 10 9838880 0. > Krylov Solver 10 10 13920 0. > Preconditioner 10 10 10080 0. > ======================================================================================================================== > Average time to get PetscTime(): 3.49944e-08 > Average time for MPI_Barrier(): 5.842e-06 > Average time for zero size MPI_Send(): 8.72551e-06 > #PETSc Option Table entries: > -config=benchmark3.json > -log_view > #End of PETSc Option Table entries > Compiled without FORTRAN kernels > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 > Configure options: --prefix=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS="-ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2" FFLAGS= CXXFLAGS="-ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2" --with-cc=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc --with-cxx=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpic++ --with-fc=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 --with-precision=double --with-scalar-type=real --with-shared-libraries=1 --with-debugging=0 --with-64-bit-indices=0 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib/libopenblas.so --with-x=0 --with-cxx-dialect=C++11 --with-boost=1 --with-clanguage=C --with-scalapack-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib/libscalapack.so --with-scalapack=1 --with-metis=1 --with-metis-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk --with-hdf5=1 --with-hdf5-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5 --with-hypre=1 --with-hypre-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne --with-parmetis=1 --with-parmetis-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4 --with-mumps=1 --with-mumps-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b --with-trilinos=1 --with-trilinos-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo --with-fftw=0 --with-cxx-dialect=C++11 --with-superlu_dist-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/include --with-superlu_dist-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib/libsuperlu_dist.a --with-superlu_dist=1 --with-suitesparse-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/include 
--with-suitesparse-lib="/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libumfpack.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libklu.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcholmod.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libbtf.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libccolamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcolamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libsuitesparseconfig.so /lib64/librt.so" --with-suitesparse=1 --with-zlib-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/include --with-zlib-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib/libz.so --with-zlib=1 > ----------------------------------------- > Libraries compiled on 2020-01-22 15:21:53 on eu-c7-051-02 > Machine characteristics: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core > Using PETSc directory: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit > Using PETSc arch: > ----------------------------------------- > > Using C compiler: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc -ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2 > Using Fortran compiler: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 > ----------------------------------------- > > Using include paths: -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/include > ----------------------------------------- > > Using C linker: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc > Using Fortran linker: 
/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 > Using libraries: -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/lib -lpetsc -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib /lib64/librt.so -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hwloc-1.11.9-a436y6rdahnn57u6oe6snwemjhcfmrso/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hwloc-1.11.9-a436y6rdahnn57u6oe6snwemjhcfmrso/lib -Wl,-rpath,/cluster/apps/lsf/10.1/linux2.6-glibc2.3-x86_64/lib -L/cluster/apps/lsf/10.1/linux2.6-glibc2.3-x86_64/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib:/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 
-Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib -lmuelu-adapters -lmuelu-interface -lmuelu -lstratimikos -lstratimikosbelos -lstratimikosaztecoo -lstratimikosamesos -lstratimikosml -lstratimikosifpack -lModeLaplace -lanasaziepetra -lanasazi -lmapvarlib -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus -lmapvarlib -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus -lbelosxpetra -lbelosepetra -lbelos -lml -lifpack -lpamgen_extras -lpamgen -lamesos -lgaleri-xpetra -lgaleri-epetra -laztecoo -lisorropia -lxpetra-sup -lxpetra -lthyraepetraext -lthyraepetra -lthyracore -lthyraepetraext -lthyraepetra -lthyracore -lepetraext -ltrilinosss -ltriutils -lzoltan -lepetra -lsacado -lrtop -lkokkoskernels -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchosparser -lteuchoscore -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchosparser -lteuchoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -lgtest -lpthread -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu_dist -lHYPRE -lopenblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lm -lz -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lstdc++ -ldl > ----------------------------------------- > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From hongzhang at anl.gov Thu Apr 30 13:20:58 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Thu, 30 Apr 2020 18:20:58 +0000 Subject: [petsc-users] Problems about tolerances set in TS In-Reply-To: References: <34442115-CFFE-4FFD-AFB5-05F24B7E546C@anl.gov> <2A440035-8BD5-4533-95D3-CC4DDB6DCA97@anl.gov> Message-ID: <07715E53-8134-4081-815D-B3921429C1D1@anl.gov> On Apr 30, 2020, at 6:22 AM, Yingjie Wu > wrote: I'm sorry for dropping the mailing list in previous mail. I went over the code and made sure I didn't set the maximum number or final time to zero for TS. And I found a very similar example in petsc/src/ts/tutorials/ex8.c. I made the following changes: 1. TSSetMaxStepRejections(ts,10); -> TSSetMaxStepRejections(ts,0); 2. 
TSSetMaxSNESFailures(ts,-1); -> TSSetMaxSNESFailures(ts,0); Now it is clear why your code terminates early. TSSetMaxStepRejections (-ts_max_reject) specifies the maximum number of failed time steps that the TS adapter allows. TSSetMaxSNESFailures (-ts_max_snes_failures) specifies the maximum number of failed nonlinear solves that the TS adapter allows. The TS adapter can reject a time step and restart it with a decreased step size if the error criterion is not met. A diverged nonlinear solve is one of the reasons a time step can be rejected. To avoid termination caused by these constraints, you can use -ts_max_snes_failures -1 -ts_max_reject -1 to set both of them to unlimited. When the nonlinear solve diverges (e.g. the number of SNES iterations reaches the limit set by -snes_max_it), the TS adapter will decrease the step size by a factor (can be changed with -ts_adapt_scale_solve_failed) and repeat the time step. Sometimes you may end up with excessively small steps if the repeated nonlinear solves keep struggling. In that case, I would turn off the adapter with -ts_adapt_type none and stick with fixed time stepping or control the step size myself. Hong (Mr.) These changes adjust the Rejections and SNESFailures to the default state. Then test the following commands: mpiexec -n 1 ./ex8 -ts_atol 1e-2 -ts_rtol 1e-2 -ts_max_time 15 -ts_type arkimex -ts_arkimex_type 2e -problem_type orego -ts_arkimex_initial_guess_extrapolate 0 -ts_adapt_time_step_increase_delay 4 -ts_monitor -snes_monitor -snes_converged_reason The output on the screen is: (......) 7 TS dt 6.86235 time 3.47215 0 SNES Function norm 2.000433909025e-01 1 SNES Function norm 1.991123862317e-01 2 SNES Function norm 1.975627926780e-01 3 SNES Function norm 1.946057832068e-01 4 SNES Function norm 1.900556025613e-01 5 SNES Function norm 1.840284735193e-01 6 SNES Function norm 1.767727556549e-01 7 SNES Function norm 1.685779659858e-01 8 SNES Function norm 1.597281513554e-01 9 SNES Function norm 1.504796576996e-01 10 SNES Function norm 1.410521859210e-01 11 SNES Function norm 1.316269768177e-01 12 SNES Function norm 1.223486899848e-01 13 SNES Function norm 1.127512649756e-01 14 SNES Function norm 1.020567711262e-01 15 SNES Function norm 8.979527463515e-02 16 SNES Function norm 7.508859688939e-02 17 SNES Function norm 7.355450203066e-02 18 SNES Function norm 2.927834451754e-06 19 SNES Function norm 4.107891676938e-15 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 19 0 SNES Function norm 1.349799651143e-01 1 SNES Function norm 1.333235099246e-01 2 SNES Function norm 1.314928290684e-01 3 SNES Function norm 1.295004620848e-01 4 SNES Function norm 1.273561109747e-01 5 SNES Function norm 1.250670006052e-01 6 SNES Function norm 1.226381484156e-01 7 SNES Function norm 1.200604281861e-01 8 SNES Function norm 1.173313259697e-01 9 SNES Function norm 1.144768440013e-01 10 SNES Function norm 1.141593852677e-01 11 SNES Function norm 1.121207287132e-01 12 SNES Function norm 1.087640960519e-01 13 SNES Function norm 1.044397486143e-01 14 SNES Function norm 9.944173247962e-02 15 SNES Function norm 9.401107464556e-02 16 SNES Function norm 8.834148211449e-02 17 SNES Function norm 8.258574767211e-02 18 SNES Function norm 7.686203062639e-02 19 SNES Function norm 7.076716424793e-02 20 SNES Function norm 6.389435437215e-02 21 SNES Function norm 5.587963465672e-02 22 SNES Function norm 4.597296586383e-02 23 SNES Function norm 3.649153890859e-02 24 SNES Function norm 1.112024137036e-06 25 SNES Function norm
converged due to CONVERGED_FNORM_RELATIVE iterations 25 8 TS dt 7.74608 time 10.3345 0 SNES Function norm 2.851716167826e-01 1 SNES Function norm 2.827877334190e-01 2 SNES Function norm 2.803985313138e-01 3 SNES Function norm 2.780035928632e-01 4 SNES Function norm 2.756024874780e-01 5 SNES Function norm 2.731947705882e-01 6 SNES Function norm 2.707799825757e-01 7 SNES Function norm 2.683576476261e-01 8 SNES Function norm 2.659272724924e-01 9 SNES Function norm 2.634883451605e-01 10 SNES Function norm 2.610403334061e-01 11 SNES Function norm 2.585826832306e-01 12 SNES Function norm 2.561148171641e-01 13 SNES Function norm 2.536361324179e-01 14 SNES Function norm 2.511459988719e-01 15 SNES Function norm 2.486437568755e-01 16 SNES Function norm 2.461287148411e-01 17 SNES Function norm 2.461031613348e-01 18 SNES Function norm 2.455705375830e-01 19 SNES Function norm 2.445878823765e-01 20 SNES Function norm 2.432027776816e-01 21 SNES Function norm 2.414549711706e-01 22 SNES Function norm 2.393776915919e-01 23 SNES Function norm 2.369987079777e-01 24 SNES Function norm 2.343411787922e-01 25 SNES Function norm 2.314243298882e-01 26 SNES Function norm 2.282639927909e-01 27 SNES Function norm 2.248730282013e-01 28 SNES Function norm 2.212616539034e-01 29 SNES Function norm 2.174376914167e-01 30 SNES Function norm 2.134067415679e-01 31 SNES Function norm 2.091722954520e-01 32 SNES Function norm 2.047263028819e-01 33 SNES Function norm 2.000468148325e-01 34 SNES Function norm 1.951751560520e-01 35 SNES Function norm 1.946187817680e-01 36 SNES Function norm 1.912149142781e-01 37 SNES Function norm 1.856377137631e-01 38 SNES Function norm 1.784574187852e-01 39 SNES Function norm 1.701485679136e-01 40 SNES Function norm 1.611004146461e-01 41 SNES Function norm 1.516277627394e-01 42 SNES Function norm 1.419813980321e-01 43 SNES Function norm 1.323577340278e-01 44 SNES Function norm 1.224961331340e-01 45 SNES Function norm 1.115622228182e-01 46 SNES Function norm 9.910072225366e-02 47 SNES Function norm 8.425984823824e-02 48 SNES Function norm 6.475245098809e-02 49 SNES Function norm 2.817391391875e-02 50 SNES Function norm 2.187931938195e-06 Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 50 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: [0]PETSC ERROR: TSStep has failed due to DIVERGED_STEP_REJECTED [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.12.4, unknown [0]PETSC ERROR: ./ex8 on a arch-linux2-c-debug named ubuntu103 by wuyj Thu Apr 30 10:40:07 2020 [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich --download-fblaslapack [0]PETSC ERROR: #1 TSStep() line 3596 in /home/wuyj/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #2 TSSolve() line 3768 in /home/wuyj/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #3 main() line 416 in ex8.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -problem_type orego [0]PETSC ERROR: -snes_converged_reason [0]PETSC ERROR: -snes_monitor [0]PETSC ERROR: -ts_adapt_time_step_increase_delay 4 [0]PETSC ERROR: -ts_arkimex_initial_guess_extrapolate 0 [0]PETSC ERROR: -ts_arkimex_type 2e [0]PETSC ERROR: -ts_atol 1e-2 [0]PETSC ERROR: -ts_max_time 15 [0]PETSC ERROR: -ts_monitor [0]PETSC ERROR: -ts_rtol 1e-2 [0]PETSC ERROR: -ts_type arkimex [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 91) - process 0 At the eighth time step, the maximum iterations number of SNES resulting in the failure of the time step calculation, the program reported errors. Then I tested the following commands: mpiexec -n 1 ./ex8 -ts_atol 1e-2 -ts_rtol 1e-2 -ts_max_time 15 -ts_type arkimex -ts_arkimex_type 2e -problem_type orego -ts_arkimex_initial_guess_extrapolate 0 -ts_adapt_time_step_increase_delay 4 -snes_monitor -snes_converged_reason -ts_monitor -ts_error_if_step_fails 0 The output in screen: ?......? 7 TS dt 6.86235 time 3.47215 0 SNES Function norm 2.000433909025e-01 1 SNES Function norm 1.991123862317e-01 2 SNES Function norm 1.975627926780e-01 3 SNES Function norm 1.946057832068e-01 4 SNES Function norm 1.900556025613e-01 5 SNES Function norm 1.840284735193e-01 6 SNES Function norm 1.767727556549e-01 7 SNES Function norm 1.685779659858e-01 8 SNES Function norm 1.597281513554e-01 9 SNES Function norm 1.504796576996e-01 10 SNES Function norm 1.410521859210e-01 11 SNES Function norm 1.316269768177e-01 12 SNES Function norm 1.223486899848e-01 13 SNES Function norm 1.127512649756e-01 14 SNES Function norm 1.020567711262e-01 15 SNES Function norm 8.979527463515e-02 16 SNES Function norm 7.508859688939e-02 17 SNES Function norm 7.355450203066e-02 18 SNES Function norm 2.927834451754e-06 19 SNES Function norm 4.107891676938e-15 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 19 0 SNES Function norm 1.349799651143e-01 1 SNES Function norm 1.333235099246e-01 2 SNES Function norm 1.314928290684e-01 3 SNES Function norm 1.295004620848e-01 4 SNES Function norm 1.273561109747e-01 5 SNES Function norm 1.250670006052e-01 6 SNES Function norm 1.226381484156e-01 7 SNES Function norm 1.200604281861e-01 8 SNES Function norm 1.173313259697e-01 9 SNES Function norm 1.144768440013e-01 10 SNES Function norm 1.141593852677e-01 11 SNES Function norm 1.121207287132e-01 12 SNES Function norm 1.087640960519e-01 13 SNES Function norm 1.044397486143e-01 14 SNES Function norm 9.944173247962e-02 15 SNES Function norm 9.401107464556e-02 16 SNES Function norm 8.834148211449e-02 17 SNES Function norm 8.258574767211e-02 18 SNES Function norm 7.686203062639e-02 19 SNES Function norm 7.076716424793e-02 20 SNES Function norm 6.389435437215e-02 21 SNES Function norm 5.587963465672e-02 22 SNES Function norm 4.597296586383e-02 23 SNES Function norm 3.649153890859e-02 24 SNES Function norm 1.112024137036e-06 25 SNES Function norm 
1.368410750160e-14 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 25 8 TS dt 7.74608 time 10.3345 0 SNES Function norm 2.851716167826e-01 1 SNES Function norm 2.827877334190e-01 2 SNES Function norm 2.803985313138e-01 3 SNES Function norm 2.780035928632e-01 4 SNES Function norm 2.756024874780e-01 5 SNES Function norm 2.731947705882e-01 6 SNES Function norm 2.707799825757e-01 7 SNES Function norm 2.683576476261e-01 8 SNES Function norm 2.659272724924e-01 9 SNES Function norm 2.634883451605e-01 10 SNES Function norm 2.610403334061e-01 11 SNES Function norm 2.585826832306e-01 12 SNES Function norm 2.561148171641e-01 13 SNES Function norm 2.536361324179e-01 14 SNES Function norm 2.511459988719e-01 15 SNES Function norm 2.486437568755e-01 16 SNES Function norm 2.461287148411e-01 17 SNES Function norm 2.461031613348e-01 18 SNES Function norm 2.455705375830e-01 19 SNES Function norm 2.445878823765e-01 20 SNES Function norm 2.432027776816e-01 21 SNES Function norm 2.414549711706e-01 22 SNES Function norm 2.393776915919e-01 23 SNES Function norm 2.369987079777e-01 24 SNES Function norm 2.343411787922e-01 25 SNES Function norm 2.314243298882e-01 26 SNES Function norm 2.282639927909e-01 27 SNES Function norm 2.248730282013e-01 28 SNES Function norm 2.212616539034e-01 29 SNES Function norm 2.174376914167e-01 30 SNES Function norm 2.134067415679e-01 31 SNES Function norm 2.091722954520e-01 32 SNES Function norm 2.047263028819e-01 33 SNES Function norm 2.000468148325e-01 34 SNES Function norm 1.951751560520e-01 35 SNES Function norm 1.946187817680e-01 36 SNES Function norm 1.912149142781e-01 37 SNES Function norm 1.856377137631e-01 38 SNES Function norm 1.784574187852e-01 39 SNES Function norm 1.701485679136e-01 40 SNES Function norm 1.611004146461e-01 41 SNES Function norm 1.516277627394e-01 42 SNES Function norm 1.419813980321e-01 43 SNES Function norm 1.323577340278e-01 44 SNES Function norm 1.224961331340e-01 45 SNES Function norm 1.115622228182e-01 46 SNES Function norm 9.910072225366e-02 47 SNES Function norm 8.425984823824e-02 48 SNES Function norm 6.475245098809e-02 49 SNES Function norm 2.817391391875e-02 50 SNES Function norm 2.187931938195e-06 Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 50 9 TS dt 1.93652 time 10.3345 steps 9 (1 rejected, 1 SNES fails), ftime 10.3345, nonlinits 126, linits 126 Although the program did not report errors, but end the operation after the eighth time step, and output results. Note that the program did not reach the set termination time (-ts_max_time 15). This indicates that the command -ts_error_if_step_fails 0, while avoiding program errors, ends TS solving after a failed SNES and does not proceed to the next step. And I'd like to continue the next step with the result after a failed SNES time step (although this result is not converged). In addition, I would like to ask the function of these two functions: TSSetMaxStepRejections and TSSetMaxSNESFailures(). I looked it up in manualpages/singleindex.html, but there were very few explanations, and I still didn't quite understand the meaning of Rejection and SNES Failures variables. Thank you very much for your reply and I hope you can help me to answer these questions. Thanks, Yingjie Zhang, Hong > ?2020?4?30??? ??1:22??? Please do not drop the mailing list when replying. It looks like the max steps or the final time has been set to zero for TS. 
You might want to check your code to see if the TS settings are correct or if they are overwritten in some callback functions by mistake. Hong (Mr.) > On Apr 29, 2020, at 10:16 PM, Yingjie Wu > wrote: > > Command: mpiexec -n 1 ./SGts -snes_fd -pc_type lu -ts_error_if_step_fails 0 -ts_monitor -snes_monitor \ > -ksp_monitor \ > -ksp_converged_reason \ > -snes_converged_reason \ > -snes_rtol 1.e-2 \ > -ksp_rtol 1.e-5 \ > -snes_max_it 2 \ > -snes_view > Output: Timestep 0: > CurrentTime 0.: > 0 TS dt 1. time 0. > iter = 0, SNES Function norm 245.204 > 0 SNES Function norm 2.452043863308e+02 > 0 KSP Residual norm 1.641317090793e+04 > 1 KSP Residual norm 2.717662845608e-10 > Linear solve converged due to CONVERGED_RTOL iterations 1 > iter = 1, SNES Function norm 170.58 > 1 SNES Function norm 1.705802879797e+02 > 0 KSP Residual norm 1.146216600447e+04 > 1 KSP Residual norm 3.272076151518e-11 > Linear solve converged due to CONVERGED_RTOL iterations 1 > iter = 2, SNES Function norm 140.625 > 2 SNES Function norm 1.406249065994e+02 > Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 2 > SNES Object: 1 MPI processes > type: newtonls maximum iterations=2, maximum function evaluations=-1530494976 > tolerances: relative=0.01, absolute=1e-50, solution=1e-08 > total number of linear solver iterations=2 > total number of function evaluations=1063 > norm schedule ALWAYS > Jacobian is built using finite differences one column at a time > SNESLineSearch Object: 1 MPI processes > type: bt > interpolation: cubic > alpha=1.000000e-04 > maxstep=1.000000e+08, minlambda=1.000000e-12 > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 > maximum iterations=40 > KSP Object: 1 MPI processes > type: gmres > restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: lu > out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 5., needed 2.84376 > Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij rows=528, cols=528 > package used to perform factorization: petsc > total: nonzeros=7863, allocated nonzeros=7863 > total number of mallocs used during MatSetValues calls=0 > not using I-node routines > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij rows=528, cols=528 > total: nonzeros=2765, allocated nonzeros=5280 > total number of mallocs used during MatSetValues calls=0 > not using I-node routines > Timestep 1: > CurrentTime 0.: > 1 TS dt 1. time 0. > > Number of timesteps = 1 final time 0.00e+00 > > After the first time step, the program directly ends the TS solution process. I've set up 50 time steps, so the program is not running properly. > Although the program accepted that first time step SNES did not achieve convergence, and there is no error information, but did not continue to go to the next time step calculation, chose to end the program. I hope that when the SNES not converge at a certain time step and reaches the set maximum number SNES iteration steps, it can go to the next time step calculation. > I am looking forward to your suggestion. > > Thanks, > Yingjie > > Zhang, Hong > ?2020?4?29??? ??11:26??? 
> Please send the list of your command line options and the screen output with -ts_monitor. > > Hong (Mr.) > >> On Apr 28, 2020, at 11:00 PM, Yingjie Wu > wrote: >> >> Thank you very much for your reply. >> I tried both switches, but unfortunately they didn't seem to meet my needs. >> -ts_adapt_always_accept >> The switch doesn't seem to work, reporting errors >> when the maximum number of steps is reached without convergence, then the program exits. >> >> -ts_error_if_step_fails 0 This switch accepts the non-convergence time step and outputs the result, but does not continue into the next time step calculation ( >> >> The time step hasn't reached the maximum time step I set >> ). >> >> And I wonder if the variable behind this switch is optional? What does it mean? >> I hope to achieve in the case of non-convergence Newton step( for example, the maximum number of Newton iteration steps reached -snes_max_it 50), can go in the next time step calculation. >> >> Thanks, >> Yingjie >> >> Zhang, Hong wrote on Apr 28, 2020 at 10:51 PM: >> -ts_error_if_step_fails 0 >> >> You might want to find out why the nonlinear solver does not converge first. If you have a hand-written Jacobian, you can validate it with -snes_test_jacobian 1 (for a small test case). >> >> Hong (Mr.) >> >> > On Apr 28, 2020, at 8:21 AM, Yingjie Wu > wrote: >> > >> > Dear PETSc developers >> > Hi, >> > >> > I have recently used TS to solve nonlinear equations with time terms. since the convergence of my model is not very good, i would like to set to iterative fixed nonlinear steps per time step. If the problem does not meet the SNES convergence criteria after fixed number of nonlinear steps , then go to the next time step calculation. I tried -snes_max_it , but didn't achieve the effect I wanted, and the program stopped after iterating the fixed number of steps. How should I set up in the program? >> > >> > Thanks, >> > Yingjie >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From awalker at student.ethz.ch Thu Apr 30 13:32:53 2020 From: awalker at student.ethz.ch (Walker Andreas) Date: Thu, 30 Apr 2020 18:32:53 +0000 Subject: [petsc-users] Performance of SLEPc's Krylov-Schur solver In-Reply-To: <09DBCFDA-4DBA-41DE-9891-6E4921DCFC3C@dsic.upv.es> References: <86B05A0E-87C4-4B23-AC8B-6C39E6538B84@student.ethz.ch> <09DBCFDA-4DBA-41DE-9891-6E4921DCFC3C@dsic.upv.es> Message-ID: Hi Jose, Thank you for the quick answer. - I solve 10 similar problems (i.e. only the numerical values of some entries differ), measure the time of EPSSolve each time and form the average. Then I do the same for a different number of cores and compare the averages. - I don't use multithreaded BLAS. [Where/How could I use it exactly?] - I just tested 8, 16, 32, 64 and 128 cores (solved 10 problems on each number of cores) and in all cases, EPSSolve takes between 171 and 205 iterations for 100 eigenvalues. On average, I'd say each number of cores leads to the same number of iterations. Best regards, Andreas > Am 30.04.2020 um 17:23 schrieb Jose E. Roman : > > > Here are some questions and comments: > > - Is the speedup relative to EPSSolve or to the overall program? Suggest using times for EPSSolve (although the difference should be small). > > - Are you using a multithreaded BLAS? > > - How many iterations is the solver doing?
You can check this easily by adding the command-line option -eps_converged_reason > If the number of iterations differ a lot between different runs with different number of processes, it is most probably due to random initial vectors. You can avoid this by setting a fixed initial vector with EPSSetInitialSpace() or running with the command-line option -bv_reproducible_random > > Jose > > >> El 30 abr 2020, a las 17:14, Matthew Knepley escribi?: >> >> On Thu, Apr 30, 2020 at 10:55 AM Walker Andreas wrote: >> Hello everyone, >> >> I have used SLEPc successfully on a FEM-related project. Even though it is very powerful overall, the speedup I measure is a bit below my expectations. Compared to using a single core, the speedup is for example around 1.8 for two cores but only maybe 50-60 for 128 cores and maybe 70 or 80 for 256 cores. Some details about my problem: >> >> - The problem is based on meshes with up to 400k degrees of freedom. DMPlex is used for organizing it. >> - ParMetis is used to partition the mesh. This yields a stiffness matrix where the vast majority of entries is in the diagonal blocks (i.e. looking at the rows owned by a core, there is a very dense square-shaped region around the diagonal and some loosely scattered nozeroes in the other columns). >> - The actual matrix from which I need eigenvalues is a 2x2 block matrix, saved as MATNEST - matrix. Each of these four matrices is computed based on the stiffness matrix and has a similar size and nonzero pattern. For a mesh of 200k dofs, one such matrix has a size of about 174kx174k and on average about 40 nonzeroes per row. >> - I use the default Krylov-Schur solver and look for the 100 smallest eigenvalues >> - The output of -log_view for the 200k-dof - mesh described above run on 128 cores is at the end of this mail. >> >> I noticed that the problem matrices are not perfectly balanced, i.e. the number of rows per core might vary between 2500 and 3000, for example. But I am not sure if this is the main reason for the poor speedup. >> >> I tried to reduce the subspace size but without effect. I also attempted to use the shift-and-invert spectral transformation but the MATNEST-type prevents this. >> >> Are there any suggestions to improve the speedup further or is this the maximum speedup that I can expect? >> >> Can you also give us the performance for this problem on one node using the same number of cores per node? Then we can calculate speedup >> and look at which functions are not speeding up. >> >> Thanks, >> >> Matt >> >> Thanks a lot in advance, >> >> Andreas Walker >> >> m&m group >> D-MAVT >> ETH Zurich >> >> ************************************************************************************************************************ >> *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** >> ************************************************************************************************************************ >> >> ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- >> >> ./Solver on a named eu-g1-050-2 with 128 processors, by awalker Thu Apr 30 15:50:22 2020 >> Using Petsc Release Version 3.10.5, Mar, 28, 2019 >> >> Max Max/Min Avg Total >> Time (sec): 6.209e+02 1.000 6.209e+02 >> Objects: 6.068e+05 1.001 6.063e+05 >> Flop: 9.230e+11 1.816 7.212e+11 9.231e+13 >> Flop/sec: 1.487e+09 1.816 1.161e+09 1.487e+11 >> MPI Messages: 1.451e+07 2.999 8.265e+06 1.058e+09 >> MPI Message Lengths: 6.062e+09 2.011 5.029e+02 5.321e+11 >> MPI Reductions: 1.512e+06 1.000 >> >> Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) >> e.g., VecAXPY() for real vectors of length N --> 2N flop >> and VecAXPY() for complex vectors of length N --> 8N flop >> >> Summary of Stages: ----- Time ------ ----- Flop ------ --- Messages --- -- Message Lengths -- -- Reductions -- >> Avg %Total Avg %Total Count %Total Avg %Total Count %Total >> 0: Main Stage: 6.2090e+02 100.0% 9.2309e+13 100.0% 1.058e+09 100.0% 5.029e+02 100.0% 1.512e+06 100.0% >> >> ------------------------------------------------------------------------------------------------------------------------ >> See the 'Profiling' chapter of the users' manual for details on interpreting output. >> Phase summary info: >> Count: number of times phase was executed >> Time and Flop: Max - maximum over all processors >> Ratio - ratio of maximum to minimum over all processors >> Mess: number of messages sent >> AvgLen: average message length (bytes) >> Reduct: number of global reductions >> Global: entire computation >> Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
>> %T - percent time in this phase %F - percent flop in this phase >> %M - percent messages in this phase %L - percent message lengths in this phase >> %R - percent reductions in this phase >> Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors) >> ------------------------------------------------------------------------------------------------------------------------ >> Event Count Time (sec) Flop --- Global --- --- Stage ---- Total >> Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >> ------------------------------------------------------------------------------------------------------------------------ >> >> --- Event Stage 0: Main Stage >> >> BuildTwoSided 20 1.0 2.3249e-01 2.2 0.00e+00 0.0 2.2e+04 4.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> BuildTwoSidedF 317 1.0 8.5016e-01 4.8 0.00e+00 0.0 2.1e+04 1.4e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatMult 150986 1.0 2.1963e+02 1.3 8.07e+10 1.8 1.1e+09 5.0e+02 1.2e+06 31 9100100 80 31 9100100 80 37007 >> MatMultAdd 603944 1.0 1.6209e+02 1.4 8.07e+10 1.8 1.1e+09 5.0e+02 0.0e+00 23 9100100 0 23 9100100 0 50145 >> MatConvert 30 1.0 1.6488e-02 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatScale 10 1.0 1.0347e-03 3.9 6.68e+05 1.8 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 65036 >> MatAssemblyBegin 916 1.0 8.6715e-01 1.4 0.00e+00 0.0 2.1e+04 1.4e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatAssemblyEnd 916 1.0 2.0682e-01 1.1 0.00e+00 0.0 4.7e+05 1.3e+02 1.5e+03 0 0 0 0 0 0 0 0 0 0 0 >> MatZeroEntries 42 1.0 7.2787e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatView 10 1.0 1.4816e+00 1.0 0.00e+00 0.0 6.4e+03 1.3e+05 3.0e+01 0 0 0 0 0 0 0 0 0 0 0 >> MatAXPY 40 1.0 1.0752e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatTranspose 80 1.0 3.0198e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatMatMult 60 1.0 3.0391e-01 1.0 7.82e+06 1.6 3.8e+05 2.8e+02 7.8e+02 0 0 0 0 0 0 0 0 0 0 2711 >> MatMatMultSym 60 1.0 2.4238e-01 1.0 0.00e+00 0.0 3.3e+05 2.4e+02 7.2e+02 0 0 0 0 0 0 0 0 0 0 0 >> MatMatMultNum 60 1.0 5.8508e-02 1.0 7.82e+06 1.6 4.7e+04 5.7e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 14084 >> MatPtAP 40 1.0 4.5617e-01 1.0 1.59e+07 1.6 3.3e+05 1.0e+03 6.4e+02 0 0 0 0 0 0 0 0 0 0 3649 >> MatPtAPSymbolic 40 1.0 2.6002e-01 1.0 0.00e+00 0.0 1.7e+05 6.5e+02 2.8e+02 0 0 0 0 0 0 0 0 0 0 0 >> MatPtAPNumeric 40 1.0 1.9293e-01 1.0 1.59e+07 1.6 1.5e+05 1.5e+03 3.2e+02 0 0 0 0 0 0 0 0 0 0 8629 >> MatTrnMatMult 40 1.0 2.3801e-01 1.0 6.09e+06 1.8 1.8e+05 1.0e+03 6.4e+02 0 0 0 0 0 0 0 0 0 0 2442 >> MatTrnMatMultSym 40 1.0 1.6962e-01 1.0 0.00e+00 0.0 1.7e+05 4.4e+02 6.4e+02 0 0 0 0 0 0 0 0 0 0 0 >> MatTrnMatMultNum 40 1.0 6.9000e-02 1.0 6.09e+06 1.8 9.7e+03 1.1e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 8425 >> MatGetLocalMat 240 1.0 4.9149e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatGetBrAoCol 160 1.0 2.0470e-02 1.6 0.00e+00 0.0 3.3e+05 4.1e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatTranspose_SeqAIJ_FAST 80 1.0 2.9940e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> Mesh Partition 1 1.0 1.4825e+00 1.0 0.00e+00 0.0 9.8e+04 6.9e+01 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> Mesh Migration 1 1.0 3.6680e-02 1.0 0.00e+00 0.0 1.5e+03 1.4e+04 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> DMPlexDistribute 1 1.0 1.5269e+00 1.0 0.00e+00 0.0 1.0e+05 3.5e+02 1.2e+01 0 0 0 0 0 0 0 0 0 0 0 >> DMPlexDistCones 1 1.0 1.8845e-02 1.2 0.00e+00 0.0 1.0e+03 1.7e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> DMPlexDistLabels 1 1.0 9.7280e-04 
1.2 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> DMPlexDistData 1 1.0 3.1499e-01 1.4 0.00e+00 0.0 9.8e+04 4.3e+01 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> DMPlexStratify 2 1.0 9.3421e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> DMPlexPrealloc 2 1.0 3.5980e-02 1.0 0.00e+00 0.0 4.0e+04 1.8e+03 3.0e+01 0 0 0 0 0 0 0 0 0 0 0 >> SFSetGraph 20 1.0 1.6069e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> SFSetUp 20 1.0 2.8043e-01 1.9 0.00e+00 0.0 6.7e+04 5.0e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> SFBcastBegin 25 1.0 3.9653e-02 2.5 0.00e+00 0.0 6.1e+04 4.9e+02 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> SFBcastEnd 25 1.0 9.0128e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> SFReduceBegin 10 1.0 4.3473e-04 5.5 0.00e+00 0.0 7.4e+03 4.0e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> SFReduceEnd 10 1.0 5.7962e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> SFFetchOpBegin 2 1.0 1.6069e-0434.7 0.00e+00 0.0 1.8e+03 4.4e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> SFFetchOpEnd 2 1.0 8.9251e-04 2.6 0.00e+00 0.0 1.8e+03 4.4e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecSet 302179 1.0 1.3128e+00 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecAssemblyBegin 1 1.0 1.3844e-03 7.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecAssemblyEnd 1 1.0 3.4710e-05 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecScatterBegin 603945 1.0 2.2874e+01 4.4 0.00e+00 0.0 1.1e+09 5.0e+02 1.0e+00 2 0100100 0 2 0100100 0 0 >> VecScatterEnd 603944 1.0 8.2651e+01 4.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 >> VecSetRandom 11 1.0 2.7061e-03 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> EPSSetUp 10 1.0 5.0371e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+01 0 0 0 0 0 0 0 0 0 0 0 >> EPSSolve 10 1.0 6.1329e+02 1.0 9.23e+11 1.8 1.1e+09 5.0e+02 1.5e+06 99100100100100 99100100100100 150509 >> STSetUp 10 1.0 2.5475e-04 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> STApply 150986 1.0 2.1997e+02 1.3 8.07e+10 1.8 1.1e+09 5.0e+02 1.2e+06 31 9100100 80 31 9100100 80 36950 >> BVCopy 1791 1.0 5.1953e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> BVMultVec 301925 1.0 1.5007e+02 3.1 3.31e+11 1.8 0.0e+00 0.0e+00 0.0e+00 14 36 0 0 0 14 36 0 0 0 220292 >> BVMultInPlace 1801 1.0 8.0080e+00 1.8 1.78e+11 1.8 0.0e+00 0.0e+00 0.0e+00 1 19 0 0 0 1 19 0 0 0 2222543 >> BVDotVec 301925 1.0 3.2807e+02 1.4 3.33e+11 1.8 0.0e+00 0.0e+00 3.0e+05 47 36 0 0 20 47 36 0 0 20 101409 >> BVOrthogonalizeV 150996 1.0 4.0292e+02 1.1 6.64e+11 1.8 0.0e+00 0.0e+00 3.0e+05 62 72 0 0 20 62 72 0 0 20 164619 >> BVScale 150996 1.0 4.1660e-01 3.2 5.27e+08 1.8 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 126494 >> BVSetRandom 10 1.0 2.5061e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> DSSolve 1801 1.0 2.0764e+01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 >> DSVectors 2779 1.0 1.2691e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> DSOther 1801 1.0 1.2944e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 >> ------------------------------------------------------------------------------------------------------------------------ >> >> Memory usage is given in bytes: >> >> Object Type Creations Destructions Memory Descendants' Mem. >> Reports information only for process 0. >> >> --- Event Stage 0: Main Stage >> >> Container 1 1 584 0. >> Distributed Mesh 6 6 29160 0. >> GraphPartitioner 2 2 1244 0. 
>> Matrix 1104 1104 136615232 0. >> Index Set 930 930 9125912 0. >> IS L to G Mapping 3 3 2235608 0. >> Section 28 26 18720 0. >> Star Forest Graph 30 30 25632 0. >> Discrete System 6 6 5616 0. >> PetscRandom 11 11 7194 0. >> Vector 604372 604372 8204816368 0. >> Vec Scatter 203 203 272192 0. >> Viewer 21 10 8480 0. >> EPS Solver 10 10 86360 0. >> Spectral Transform 10 10 8400 0. >> Basis Vectors 10 10 530848 0. >> Region 10 10 6800 0. >> Direct Solver 10 10 9838880 0. >> Krylov Solver 10 10 13920 0. >> Preconditioner 10 10 10080 0. >> ======================================================================================================================== >> Average time to get PetscTime(): 3.49944e-08 >> Average time for MPI_Barrier(): 5.842e-06 >> Average time for zero size MPI_Send(): 8.72551e-06 >> #PETSc Option Table entries: >> -config=benchmark3.json >> -log_view >> #End of PETSc Option Table entries >> Compiled without FORTRAN kernels >> Compiled with full precision matrices (default) >> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 >> Configure options: --prefix=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS="-ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2" FFLAGS= CXXFLAGS="-ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2" --with-cc=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc --with-cxx=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpic++ --with-fc=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 --with-precision=double --with-scalar-type=real --with-shared-libraries=1 --with-debugging=0 --with-64-bit-indices=0 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib/libopenblas.so --with-x=0 --with-cxx-dialect=C++11 --with-boost=1 --with-clanguage=C --with-scalapack-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib/libscalapack.so --with-scalapack=1 --with-metis=1 --with-metis-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk --with-hdf5=1 --with-hdf5-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5 --with-hypre=1 --with-hypre-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne --with-parmetis=1 --with-parmetis-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4 --with-mumps=1 --with-mumps-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b --with-trilinos=1 --with-trilinos-dir=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo --with-fftw=0 --with-cxx-dialect=C++11 --with-superlu_dist-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/include --with-superlu_dist-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib/libsuperlu_dist.a --with-superlu_dist=1 
--with-suitesparse-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/include --with-suitesparse-lib="/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libumfpack.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libklu.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcholmod.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libbtf.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libccolamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcolamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libcamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libamd.so /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib/libsuitesparseconfig.so /lib64/librt.so" --with-suitesparse=1 --with-zlib-include=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/include --with-zlib-lib=/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib/libz.so --with-zlib=1 >> ----------------------------------------- >> Libraries compiled on 2020-01-22 15:21:53 on eu-c7-051-02 >> Machine characteristics: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core >> Using PETSc directory: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit >> Using PETSc arch: >> ----------------------------------------- >> >> Using C compiler: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc -ftree-vectorize -O2 -march=core-avx2 -fPIC -mavx2 >> Using Fortran compiler: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 >> ----------------------------------------- >> >> Using include paths: -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/include -I/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/include >> ----------------------------------------- >> >> Using C linker: 
/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpicc >> Using Fortran linker: /cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/bin/mpif90 >> Using libraries: -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/petsc-3.10.5-3czpbqhprn65yalty4o46knmhytixlit/lib -lpetsc -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/trilinos-12.14.1-hcdtxkqirqt6wkui3vkie5qse64payqo/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/mumps-5.1.1-36fzslrywwsg7gxnoxbjbzwuz6o74n6b/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/netlib-scalapack-2.0.2-bq6sqixlc4zwxpfrtbu7jt7twhps5ldv/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/suite-sparse-5.1.0-sk4v2rs7dfpese3zgsyigwtv2w66v2gz/lib /lib64/librt.so -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/superlu-dist-6.1.1-ejpmx43wk4vplnmry5n5njvgqvcvfe6x/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hypre-2.14.0-ly5dmcaty5wx4opqwspvoim6zss6sxne/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openblas-0.2.20-cot3cawsqf4pkxjwzjexaykbwn2ch3ii/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hdf5-1.10.1-sbxt5qlg2pojshva2b6kdflsy64i4rs5/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/parmetis-4.0.3-ik3r6faxeb6uzyywppuc2niuvivwiux4/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/metis-5.1.0-bqbfmcvyqigdaeetkg6fuhdh4eplu3fk/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/zlib-1.2.11-bu2rglshnlxrwc24334r76jr34jm2fxy/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hwloc-1.11.9-a436y6rdahnn57u6oe6snwemjhcfmrso/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/hwloc-1.11.9-a436y6rdahnn57u6oe6snwemjhcfmrso/lib -Wl,-rpath,/cluster/apps/lsf/10.1/linux2.6-glibc2.3-x86_64/lib -L/cluster/apps/lsf/10.1/linux2.6-glibc2.3-x86_64/lib -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-6.3.0/openmpi-3.0.1-k6n5k3l3baqlkdw3w7il7dwb6wilr6r6/lib 
-Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib:/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib/gcc/x86_64-pc-linux-gnu/6.3.0 -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib64 -Wl,-rpath,/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib -L/cluster/spack/apps/linux-centos7-x86_64/gcc-4.8.5/gcc-6.3.0-sqhtfh32p5gerbkvi5hih7cfvcpmewvj/lib -lmuelu-adapters -lmuelu-interface -lmuelu -lstratimikos -lstratimikosbelos -lstratimikosaztecoo -lstratimikosamesos -lstratimikosml -lstratimikosifpack -lModeLaplace -lanasaziepetra -lanasazi -lmapvarlib -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus -lmapvarlib -lsuplib_cpp -lsuplib_c -lsuplib -lsupes -laprepro_lib -lchaco -lio_info_lib -lIonit -lIotr -lIohb -lIogs -lIogn -lIovs -lIopg -lIoexo_fac -lIopx -lIofx -lIoex -lIoss -lnemesis -lexoIIv2for32 -lexodus_for -lexodus -lbelosxpetra -lbelosepetra -lbelos -lml -lifpack -lpamgen_extras -lpamgen -lamesos -lgaleri-xpetra -lgaleri-epetra -laztecoo -lisorropia -lxpetra-sup -lxpetra -lthyraepetraext -lthyraepetra -lthyracore -lthyraepetraext -lthyraepetra -lthyracore -lepetraext -ltrilinosss -ltriutils -lzoltan -lepetra -lsacado -lrtop -lkokkoskernels -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchosparser -lteuchoscore -lteuchoskokkoscomm -lteuchoskokkoscompat -lteuchosremainder -lteuchosnumerics -lteuchoscomm -lteuchosparameterlist -lteuchosparser -lteuchoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -lkokkosalgorithms -lkokkoscontainers -lkokkoscore -lgtest -lpthread -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu_dist -lHYPRE -lopenblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lm -lz -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lstdc++ -ldl >> ----------------------------------------- >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > From rlmackie862 at gmail.com Thu Apr 30 19:46:34 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Thu, 30 Apr 2020 17:46:34 -0700 Subject: [petsc-users] MPI error for large number of processes and subcomms In-Reply-To: References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com> <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com> <4035EB5E-DA6B-4995-8710-B062C8663D41@gmail.com> Message-ID: <4BD0AEFB-8320-43A2-80B9-E7E4326B12BB@gmail.com> Hi Junchao, Unfortunately these modifications did not work on our cluster (see output below). 
However, I am not asking you to spend anymore time on this, as we are able to avoid the problem by setting appropriate sysctl parameters into /etc/sysctl.conf. Thank you again for all your help on this. Randy Output of test program: mpiexec -np 1280 -hostfile machines ./test -nsubs 160 -nx 100 -ny 100 -nz 10 -max_pending_isends 64 Started ind2 max 31999999 nis 33600 begin VecScatter create [1175]PETSC ERROR: #1 PetscCommBuildTwoSided_Ibarrier() line 102 in /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/sys/utils/mpits.c [1175]PETSC ERROR: #2 PetscCommBuildTwoSided() line 313 in /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/sys/utils/mpits.c [1175]PETSC ERROR: #3 PetscSFSetUp_Basic() line 33 in /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/is/sf/impls/basic/sfbasic.c [1175]PETSC ERROR: #4 PetscSFSetUp() line 253 in /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/is/sf/interface/sf.c [1175]PETSC ERROR: #5 VecScatterSetUp_SF() line 747 in /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/vscat/impls/sf/vscatsf.c [1175]PETSC ERROR: #6 VecScatterSetUp() line 208 in /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/vscat/interface/vscatfce.c [1175]PETSC ERROR: #7 VecScatterCreate() line 287 in /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/vscat/interface/vscreate.c > On Apr 27, 2020, at 9:59 AM, Junchao Zhang wrote: > > Randy, > You are absolutely right. The AOApplicationToPetsc could not be removed. Since the excessive communication is inevitable, I made two changes in petsc to ease that. One is I skewed the communication to let each rank send to ranks greater than itself first. The other is an option, -max_pending_isend, to control number of pending isends. Current default is 512. > I have an MR at https://gitlab.com/petsc/petsc/-/merge_requests/2757 . I tested it dozens of times with your example at 5120 ranks. It worked fine. > Please try it in your environment and let me know the result. Since the failure is random, you may need to run multiple times. > > BTW, if no objection, I'd like to add your excellent example to petsc repo. > > Thanks > --Junchao Zhang > > > On Fri, Apr 24, 2020 at 5:32 PM Randall Mackie > wrote: > Hi Junchao, > > I tested by commenting out the AOApplicationToPetsc calls as you suggest, but it doesn?t work because it doesn?t maintain the proper order of the elements in the scattered vectors. > > I attach a modified version of the test code where I put elements into the global vector, then carry out the scatter, and check on the subcomms that they are correct. > > You can see everything is fine with the AOApplicationToPetsc calls, but the comparison fails when those are commented out. > > If there is some way I can achieve the right VecScatters without those calls, I would be happy to know how to do that. > > Thank you again for your help. > > Randy > > ps. I suggest you run this test with nx=ny=nz=10 and only a couple subcomms and maybe 4 processes to demonstrate the behavior > > >> On Apr 20, 2020, at 2:45 PM, Junchao Zhang > wrote: >> >> Hello, Randy, >> I further looked at the problem and believe it was due to overwhelming traffic. The code sometimes fails at MPI_Waitall. I printed out MPI error strings of bad MPI Statuses. One of them is like "MPID_nem_tcp_connpoll(1845): Communication error with rank 25: Connection reset by peer", which is a tcp error and has nothing to do with petsc. 
>> Further investigation shows in the case of 5120 ranks with 320 sub communicators, during VecScatterSetUp, each rank has around 640 isends/irecvs neighbors, and quite a few ranks has 1280 isends neighbors. I guess these overwhelming isends occasionally crashed the connection. >> The piece of code in VecScatterSetUp is to calculate the communication pattern. With index sets "having good locality", the calculate itself incurs less traffic. Here good locality means indices in an index set mostly point to local entries. However, the AOApplicationToPetsc() call in your code unnecessarily ruined the good petsc ordering. If we remove AOApplicationToPetsc() (the vecscatter result is still correct) , then each rank uniformly has around 320 isends/irecvs. >> So, test with this modification and see if it really works in your environment. If not applicable, we can provide options in petsc to carry out the communication in phases to avoid flooding the network (though it is better done by MPI). >> >> Thanks. >> --Junchao Zhang >> >> >> On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie > wrote: >> Hi Junchao, >> >> Thank you for your efforts. >> We tried petsc-3.13.0 but it made no difference. >> We think now the issue are with sysctl parameters, and increasing those seemed to have cleared up the problem. >> This also most likely explains how different clusters had different behaviors with our test code. >> >> We are now running our code and will report back once we are sure that there are no further issues. >> >> Thanks again for your help. >> >> Randy M. >> >>> On Apr 17, 2020, at 8:09 AM, Junchao Zhang > wrote: >>> >>> >>> >>> >>> On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang > wrote: >>> Randy, >>> I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also found the error went away with petsc-3.13. However, I have not figured out what is the bug and which commit fixed it :). >>> So at your side, it is better to use the latest petsc. >>> I want to add that even with petsc-3.12.4 the error is random. I was only able to reproduce the error once, so I can not claim petsc-3.13 actually fixed it (or, the bug is really in petsc). >>> >>> --Junchao Zhang >>> >>> >>> On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang > wrote: >>> Randy, >>> Up to now I could not reproduce your error, even with the biggest mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 >>> While I continue doing test, you can try other options. It looks you want to duplicate a vector to subcomms. I don't think you need the two lines: >>> call AOApplicationToPetsc(aoParent,nis,ind1,ierr) >>> call AOApplicationToPetsc(aoSub,nis,ind2,ierr) >>> In addition, you can use simpler and more memory-efficient index sets. There is a petsc example for this task, see case 3 in https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c >>> BTW, it is good to use petsc master so we are on the same page. >>> --Junchao Zhang >>> >>> >>> On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie > wrote: >>> Hi Junchao, >>> >>> So I was able to create a small test code that duplicates the issue we have been having, and it is attached to this email in a zip file. >>> Included is the test.F90 code, the commands to duplicate crash and to duplicate a successful run, output errors, and our petsc configuration. 
>>> >>> Our findings to date include: >>> >>> The error is reproducible in a very short time with this script >>> It is related to nproc*nsubs and (although to a less extent) to DM grid size >>> It happens regardless of MPI implementation (mpich, intel mpi 2018, 2019, openmpi) or compiler (gfortran/gcc , intel 2018) >>> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to slightly increase the limit, but still fails on the full machine set. >>> Nothing looks interesting on valgrind >>> >>> Our initial tests were carried out on an Azure cluster, but we also tested on our smaller cluster, and we found the following: >>> >>> Works: >>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>> >>> Crashes (this works on Azure) >>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>> >>> So it looks like it may also be related to the physical number of nodes as well. >>> >>> In any case, even with 2560 processes on 192 cores the memory does not go above 3.5 Gbyes so you don?t need a huge cluster to test. >>> >>> Thanks, >>> >>> Randy M. >>> >>> >>> >>>> On Apr 14, 2020, at 12:23 PM, Junchao Zhang > wrote: >>>> >>>> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why I doubted it was the problem. Even if users configure petsc with 64-bit indices, we use PetscMPIInt in MPI calls. So it is not a problem. >>>> Try -vecscatter_type mpi1 to restore to the original VecScatter implementation. If the problem still remains, could you provide a test example for me to debug? >>>> >>>> --Junchao Zhang >>>> >>>> >>>> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie > wrote: >>>> Hi Junchao, >>>> >>>> We have tried your two suggestions but the problem remains. >>>> And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. >>>> >>>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking the problem must be elsewhere and not MPI. >>>> >>>> Give that this is a 64 bit indices build of PETSc, is there some possible incompatibility between PETSc and MPI calls? >>>> >>>> We are open to any other possible suggestions to try as other than valgrind on thousands of processes we seem to have run out of ideas. >>>> >>>> Thanks, Randy M. >>>> >>>>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang > wrote: >>>>> >>>>> >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang > wrote: >>>>> Randy, >>>>> Someone reported similar problem before. It turned out an Intel MPI MPI_Allreduce bug. A workaround is setting the environment variable I_MPI_ADJUST_ALLREDUCE=1.arr >>>>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>>>> But you mentioned mpich also had the error. So maybe the problem is not the same. So let's try the workaround first. If it doesn't work, add another petsc option -build_twosided allreduce, which is a workaround for Intel MPI_Ibarrier bugs we met. >>>>> Thanks. >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie > wrote: >>>>> Dear PETSc users, >>>>> >>>>> We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. >>>>> This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. 
>>>>>
>>>>> The run generates the following messages, but doesn't crash, just seems to hang with all processes continuing to show activity:
>>>>>
>>>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c
>>>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c
>>>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c
>>>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c
>>>>>
>>>>>
>>>>> Looking at line 117 in PetscGatherMessageLengths we find the offending statement is the MPI_Isend:
>>>>>
>>>>>
>>>>> /* Post the Isends with the message length-info */
>>>>> for (i=0,j=0; i<size; i++) {
>>>>>   if (ilengths[i]) {
>>>>>     ierr = MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr);
>>>>>     j++;
>>>>>   }
>>>>> }
>>>>>
>>>>> We have tried this with Intel MPI 2018, 2019, and mpich, all giving the same problem.
>>>>>
>>>>> We suspect there is some limit being set on this cloud cluster on the number of file connections or something, but we don't know.
>>>>>
>>>>> Anyone have any ideas? We are sort of grasping for straws at this point.
>>>>>
>>>>> Thanks, Randy M.
>>>>
>>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From junchao.zhang at gmail.com  Thu Apr 30 20:29:58 2020
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Thu, 30 Apr 2020 20:29:58 -0500
Subject: [petsc-users] MPI error for large number of processes and subcomms
In-Reply-To: <4BD0AEFB-8320-43A2-80B9-E7E4326B12BB@gmail.com>
References: <0A99C1D7-4AA5-4F14-8EB2-8108C8F58423@gmail.com>
 <04985859-7590-4514-9526-FC8FB1A30EB2@gmail.com>
 <4035EB5E-DA6B-4995-8710-B062C8663D41@gmail.com>
 <4BD0AEFB-8320-43A2-80B9-E7E4326B12BB@gmail.com>
Message-ID: 

I guess you can fix that with an additional option, -build_twosided allreduce
We have two algorithms for PetscCommBuildTwoSided: ibarrier (when # of ranks > 1024) and allreduce (otherwise). The flow control with ibarrier is much weaker than that in allreduce. Though in my tests, they both worked.
Thanks.
--Junchao Zhang

On Thu, Apr 30, 2020 at 7:46 PM Randall Mackie wrote:

> Hi Junchao,
>
> Unfortunately these modifications did not work on our cluster (see output
> below).
> However, I am not asking you to spend anymore time on this, as we are able
> to avoid the problem by setting appropriate sysctl parameters into
> /etc/sysctl.conf.
>
> Thank you again for all your help on this.
> > Randy > > > Output of test program: > > mpiexec -np 1280 -hostfile machines ./test -nsubs 160 -nx 100 -ny 100 -nz > 10 -max_pending_isends 64 > Started > > ind2 max 31999999 > nis 33600 > begin VecScatter create > [1175]PETSC ERROR: #1 PetscCommBuildTwoSided_Ibarrier() line 102 in > /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/sys/utils/mpits.c > [1175]PETSC ERROR: #2 PetscCommBuildTwoSided() line 313 in > /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/sys/utils/mpits.c > [1175]PETSC ERROR: #3 PetscSFSetUp_Basic() line 33 in > /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/is/sf/impls/basic/sfbasic.c > [1175]PETSC ERROR: #4 PetscSFSetUp() line 253 in > /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/is/sf/interface/sf.c > [1175]PETSC ERROR: #5 VecScatterSetUp_SF() line 747 in > /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/vscat/impls/sf/vscatsf.c > [1175]PETSC ERROR: #6 VecScatterSetUp() line 208 in > /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/vscat/interface/vscatfce.c > [1175]PETSC ERROR: #7 VecScatterCreate() line 287 in > /state/std2/FEMI/PETSc/petsc-jczhang-throttle-pending-isends/src/vec/vscat/interface/vscreate.c > > > > On Apr 27, 2020, at 9:59 AM, Junchao Zhang > wrote: > > Randy, > You are absolutely right. The AOApplicationToPetsc could not be > removed. Since the excessive communication is inevitable, I made two > changes in petsc to ease that. One is I skewed the communication to let > each rank send to ranks greater than itself first. The other is an option, > -max_pending_isend, to control number of pending isends. Current default is > 512. > I have an MR at https://gitlab.com/petsc/petsc/-/merge_requests/2757. > I tested it dozens of times with your example at 5120 ranks. It worked fine. > Please try it in your environment and let me know the result. Since the > failure is random, you may need to run multiple times. > > BTW, if no objection, I'd like to add your excellent example to petsc > repo. > > Thanks > --Junchao Zhang > > > On Fri, Apr 24, 2020 at 5:32 PM Randall Mackie > wrote: > >> Hi Junchao, >> >> I tested by commenting out the AOApplicationToPetsc calls as you suggest, >> but it doesn?t work because it doesn?t maintain the proper order of the >> elements in the scattered vectors. >> >> I attach a modified version of the test code where I put elements into >> the global vector, then carry out the scatter, and check on the subcomms >> that they are correct. >> >> You can see everything is fine with the AOApplicationToPetsc calls, but >> the comparison fails when those are commented out. >> >> If there is some way I can achieve the right VecScatters without those >> calls, I would be happy to know how to do that. >> >> Thank you again for your help. >> >> Randy >> >> ps. I suggest you run this test with nx=ny=nz=10 and only a couple >> subcomms and maybe 4 processes to demonstrate the behavior >> >> >> On Apr 20, 2020, at 2:45 PM, Junchao Zhang >> wrote: >> >> Hello, Randy, >> I further looked at the problem and believe it was due to overwhelming >> traffic. The code sometimes fails at MPI_Waitall. I printed out MPI error >> strings of bad MPI Statuses. One of them is like >> "MPID_nem_tcp_connpoll(1845): Communication error with rank 25: Connection >> reset by peer", which is a tcp error and has nothing to do with petsc. 
>> Further investigation shows in the case of 5120 ranks with 320 sub >> communicators, during VecScatterSetUp, each rank has around 640 >> isends/irecvs neighbors, and quite a few ranks has 1280 isends neighbors. I >> guess these overwhelming isends occasionally crashed the connection. >> The piece of code in VecScatterSetUp is to calculate the communication >> pattern. With index sets "having good locality", the calculate itself >> incurs less traffic. Here good locality means indices in an index set >> mostly point to local entries. However, the AOApplicationToPetsc() call in >> your code unnecessarily ruined the good petsc ordering. If we remove >> AOApplicationToPetsc() (the vecscatter result is still correct) , then each >> rank uniformly has around 320 isends/irecvs. >> So, test with this modification and see if it really works in your >> environment. If not applicable, we can provide options in petsc to carry >> out the communication in phases to avoid flooding the network (though it is >> better done by MPI). >> >> Thanks. >> --Junchao Zhang >> >> >> On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie >> wrote: >> >>> Hi Junchao, >>> >>> Thank you for your efforts. >>> We tried petsc-3.13.0 but it made no difference. >>> We think now the issue are with sysctl parameters, and increasing those >>> seemed to have cleared up the problem. >>> This also most likely explains how different clusters had different >>> behaviors with our test code. >>> >>> We are now running our code and will report back once we are sure that >>> there are no further issues. >>> >>> Thanks again for your help. >>> >>> Randy M. >>> >>> On Apr 17, 2020, at 8:09 AM, Junchao Zhang >>> wrote: >>> >>> >>> >>> >>> On Thu, Apr 16, 2020 at 11:13 PM Junchao Zhang >>> wrote: >>> >>>> Randy, >>>> I reproduced your error with petsc-3.12.4 and 5120 mpi ranks. I also >>>> found the error went away with petsc-3.13. However, I have not figured out >>>> what is the bug and which commit fixed it :). >>>> So at your side, it is better to use the latest petsc. >>>> >>> I want to add that even with petsc-3.12.4 the error is random. I was >>> only able to reproduce the error once, so I can not claim petsc-3.13 >>> actually fixed it (or, the bug is really in petsc). >>> >>> >>>> --Junchao Zhang >>>> >>>> >>>> On Thu, Apr 16, 2020 at 9:06 PM Junchao Zhang >>>> wrote: >>>> >>>>> Randy, >>>>> Up to now I could not reproduce your error, even with the biggest >>>>> mpirun -n 5120 ./test -nsubs 320 -nx 100 -ny 100 -nz 100 >>>>> While I continue doing test, you can try other options. It looks you >>>>> want to duplicate a vector to subcomms. I don't think you need the two >>>>> lines: >>>>> >>>>> call AOApplicationToPetsc(aoParent,nis,ind1,ierr) >>>>> call AOApplicationToPetsc(aoSub,nis,ind2,ierr) >>>>> >>>>> In addition, you can use simpler and more memory-efficient index >>>>> sets. There is a petsc example for this task, see case 3 in >>>>> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c >>>>> BTW, it is good to use petsc master so we are on the same page. >>>>> --Junchao Zhang >>>>> >>>>> >>>>> On Wed, Apr 15, 2020 at 10:28 AM Randall Mackie >>>>> wrote: >>>>> >>>>>> Hi Junchao, >>>>>> >>>>>> So I was able to create a small test code that duplicates the issue >>>>>> we have been having, and it is attached to this email in a zip file. >>>>>> Included is the test.F90 code, the commands to duplicate crash and to >>>>>> duplicate a successful run, output errors, and our petsc configuration. 
>>>>>> >>>>>> Our findings to date include: >>>>>> >>>>>> The error is reproducible in a very short time with this script >>>>>> It is related to nproc*nsubs and (although to a less extent) to DM >>>>>> grid size >>>>>> It happens regardless of MPI implementation (mpich, intel mpi 2018, >>>>>> 2019, openmpi) or compiler (gfortran/gcc , intel 2018) >>>>>> No effect changing vecscatter_type to mpi1 or mpi3. Mpi1 seems to >>>>>> slightly increase the limit, but still fails on the full machine set. >>>>>> Nothing looks interesting on valgrind >>>>>> >>>>>> Our initial tests were carried out on an Azure cluster, but we also >>>>>> tested on our smaller cluster, and we found the following: >>>>>> >>>>>> Works: >>>>>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1280 -hostfile hostfile >>>>>> ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>>>>> >>>>>> Crashes (this works on Azure) >>>>>> $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2560 -hostfile hostfile >>>>>> ./test -nsubs 80 -nx 100 -ny 100 -nz 100 >>>>>> >>>>>> So it looks like it may also be related to the physical number of >>>>>> nodes as well. >>>>>> >>>>>> In any case, even with 2560 processes on 192 cores the memory does >>>>>> not go above 3.5 Gbyes so you don?t need a huge cluster to test. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Randy M. >>>>>> >>>>>> >>>>>> >>>>>> On Apr 14, 2020, at 12:23 PM, Junchao Zhang >>>>>> wrote: >>>>>> >>>>>> There is an MPI_Allreduce in PetscGatherNumberOfMessages, that is why >>>>>> I doubted it was the problem. Even if users configure petsc with 64-bit >>>>>> indices, we use PetscMPIInt in MPI calls. So it is not a problem. >>>>>> Try -vecscatter_type mpi1 to restore to the original VecScatter >>>>>> implementation. If the problem still remains, could you provide a test >>>>>> example for me to debug? >>>>>> >>>>>> --Junchao Zhang >>>>>> >>>>>> >>>>>> On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie < >>>>>> rlmackie862 at gmail.com> wrote: >>>>>> >>>>>>> Hi Junchao, >>>>>>> >>>>>>> We have tried your two suggestions but the problem remains. >>>>>>> And the problem seems to be on the MPI_Isend line 117 in >>>>>>> PetscGatherMessageLengths and not MPI_AllReduce. >>>>>>> >>>>>>> We have now tried Intel MPI, Mpich, and OpenMPI, and so are thinking >>>>>>> the problem must be elsewhere and not MPI. >>>>>>> >>>>>>> Give that this is a 64 bit indices build of PETSc, is there some >>>>>>> possible incompatibility between PETSc and MPI calls? >>>>>>> >>>>>>> We are open to any other possible suggestions to try as other than >>>>>>> valgrind on thousands of processes we seem to have run out of ideas. >>>>>>> >>>>>>> Thanks, Randy M. >>>>>>> >>>>>>> On Apr 13, 2020, at 8:54 AM, Junchao Zhang >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> --Junchao Zhang >>>>>>> >>>>>>> >>>>>>> On Mon, Apr 13, 2020 at 10:53 AM Junchao Zhang < >>>>>>> junchao.zhang at gmail.com> wrote: >>>>>>> >>>>>>>> Randy, >>>>>>>> Someone reported similar problem before. It turned out an Intel >>>>>>>> MPI MPI_Allreduce bug. A workaround is setting the environment variable >>>>>>>> I_MPI_ADJUST_ALLREDUCE=1.arr >>>>>>>> >>>>>>> Correct: I_MPI_ADJUST_ALLREDUCE=1 >>>>>>> >>>>>>>> But you mentioned mpich also had the error. So maybe the problem >>>>>>>> is not the same. So let's try the workaround first. If it doesn't work, add >>>>>>>> another petsc option -build_twosided allreduce, which is a workaround for >>>>>>>> Intel MPI_Ibarrier bugs we met. >>>>>>>> Thanks. 
>>>>>>>> --Junchao Zhang
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie <
>>>>>>>> rlmackie862 at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Dear PETSc users,
>>>>>>>>>
>>>>>>>>> We are trying to understand an issue that has come up in running
>>>>>>>>> our code on a large cloud cluster with a large number of processes and
>>>>>>>>> subcomms.
>>>>>>>>> This is code that we use daily on multiple clusters without
>>>>>>>>> problems, and that runs valgrind clean for small test problems.
>>>>>>>>>
>>>>>>>>> The run generates the following messages, but doesn't crash, just
>>>>>>>>> seems to hang with all processes continuing to show activity:
>>>>>>>>>
>>>>>>>>> [492]PETSC ERROR: #1 PetscGatherMessageLengths() line 117 in
>>>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/sys/utils/mpimesg.c
>>>>>>>>> [492]PETSC ERROR: #2 VecScatterSetUp_SF() line 658 in
>>>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/impls/sf/vscatsf.c
>>>>>>>>> [492]PETSC ERROR: #3 VecScatterSetUp() line 209 in
>>>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscatfce.c
>>>>>>>>> [492]PETSC ERROR: #4 VecScatterCreate() line 282 in
>>>>>>>>> /mnt/home/cgg/PETSc/petsc-3.12.4/src/vec/vscat/interface/vscreate.c
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Looking at line 117 in PetscGatherMessageLengths we find the
>>>>>>>>> offending statement is the MPI_Isend:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> /* Post the Isends with the message length-info */
>>>>>>>>> for (i=0,j=0; i<size; i++) {
>>>>>>>>>   if (ilengths[i]) {
>>>>>>>>>     ierr =
>>>>>>>>> MPI_Isend((void*)(ilengths+i),1,MPI_INT,i,tag,comm,s_waits+j);CHKERRQ(ierr);
>>>>>>>>>     j++;
>>>>>>>>>   }
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> We have tried this with Intel MPI 2018, 2019, and mpich, all
>>>>>>>>> giving the same problem.
>>>>>>>>>
>>>>>>>>> We suspect there is some limit being set on this cloud cluster on
>>>>>>>>> the number of file connections or something, but we don't know.
>>>>>>>>>
>>>>>>>>> Anyone have any ideas? We are sort of grasping for straws at this
>>>>>>>>> point.
>>>>>>>>>
>>>>>>>>> Thanks, Randy M.
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
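
A minimal sketch (not part of the original thread) of how the workaround suggested above could be applied from source code instead of the command line. It assumes PETSc 3.7 or later, where PetscOptionsSetValue() takes the options database as its first argument (NULL for the global database), and it uses a placeholder vector in place of the application's real DMs and VecScatters:

#include <petscvec.h>
int main(int argc, char **argv){
  PetscErrorCode ierr;
  Vec x;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* Programmatic equivalent of the command-line option "-build_twosided allreduce":
     select the allreduce algorithm for PetscCommBuildTwoSided before any
     VecScatter/PetscSF setup takes place. NULL selects the global options database. */
  ierr = PetscOptionsSetValue(NULL, "-build_twosided", "allreduce"); CHKERRQ(ierr);
  /* ... create the DMs, vectors and VecScatters of the actual application here;
     the VecScatterCreate()/PetscSFSetUp() calls seen in the tracebacks above
     will then use the allreduce flavor of PetscCommBuildTwoSided ... */
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &x); CHKERRQ(ierr); /* placeholder object */
  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

On the command line, the same effect is obtained by appending the option to the run command used earlier in the thread, e.g. mpiexec -np 1280 -hostfile machines ./test -nsubs 160 -nx 100 -ny 100 -nz 10 -build_twosided allreduce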